[Binary artifact: POSIX tar archive (ustar format) containing gzip-compressed CI output. The compressed payload is not recoverable as text. Archive member listing, recovered from the tar headers:]

var/home/core/zuul-output/                      directory, mode 0755, owner core:core
var/home/core/zuul-output/logs/                 directory, mode 0755, owner core:core
var/home/core/zuul-output/logs/kubelet.log.gz   file, mode 0644, owner core:core (gzip-compressed kubelet log)
ZIO=$L7 yL,g~V1qK2J+y$QqXzou>a".._Fh}K5=0} j Z$ZY1'Їeɧa,ig|/:w2w3q,M=yFuRڵ^9\J}2ok%/+tS$`Ϡ$ ,|gMD(N- [Em#W08߫AjT`GgZy{ŽVtFog[|զU'bd)d%3HADs?zɟV^jGXJaZ*P+Fe %e^-|tJO;4m1J*Lpf2vRFEpZ,1„(izЦD [FiMnED*D(jrHRzkㅑJmc}J!lER7}.p+niNFCuEƱP.qi'HVr-)x]e'W$!fh1ubJ+L"*&Utd0<+ε cX`i XD0!S&J;f2Rʃp-Ggi%8=#ʟ RNR)a\HZH3*)k\`)1D`L1R@لw0>,t:o0Xőp.uXE00e9!X-##/o!֏#4 Yi_NrOu*vt$H*Sx/^T #(bF Б|RPf:QDŷη:uubեXb՛ѮԮ^RojWwmѴi)t 7FUő$9ZE+G4Tb T*Syڭ a]M\i1w-D$ɀ2'FJ1)_ *#5{Q`,0Fs2N`B<UXE2f,`ZFL&Z5OHKDf\6pzn6U~[ElrM}k{u{zy0ښG{L%ӸɞS^s&[np'wm-ݚ]FКJ $Eh{G50ோIۜ,?Nft ~IkwIֽNi{08G̝a:lot{~=p~2z.4_27f2^n,~2UoX|}3q-Aߴ}x|;$KkHжhFh*=6a[bP O+Ñezdv-16fuv0V҃ya\J&.1W^c o̱x_Xg]&<*ϩVeJ-(rQu.eNrq>-1E:츍$oCR"jxT]koG+}H~v/Ydq?l hR!)%JV )A=")K69SSU]U]s_ $&n4+Anw__ohto%wr -SZؖ ^::N*8s\"Rk/,Q'NRJp6TJ5#DޫgTF'F:XL̜2p=)`$Hi=.Ep;JpTߞ7qpg_؛f<}!G_vEyvV*+ ڃO/k1}ǿWp,@’qV^ܳ+wTep{jR̆ Md3&G*^6:W x (㣃,cAI'm - I3g*#&(S$TeSVk'"g^:6sgZC6L7D`Ndc hDCHbO`X_@c lDgADIIScBNhDKrfitb 9SBc>\oqVy& UdAE z5`r6DåB( R-woQF/O)HlE}z!e[Lo :?ߥ %5M*|_R:MK/iѨ.GEqZ2絧\-%T}-pQ濛J͵LBu"բʍbu.+H55peZ|ܞf?>b<\3 *o鸩[pSTI 6qa=i)'3al[LO?jAY5a,mwW]ѻpR3=u!y<~n,JkXJ%?:8oMVVMo)/0VpZ{0 (6/_N/g&OU7bHJG('li98}7~x?"u? m6$'s`=u]*mRM]\9[/`"N>/FF)x?dkAS,͈9]J<N?Sn,ݘ-nPȶ}Ҡf_J;41`I}pWw [&31/iX8;b6OFGݔ7xE[ kAޠ/q`9IUWmoflUoe~į4]UmѵFnBMGET[}:e2?O43$'΋FJ{mhr>M<0u& cYh7jܵzl񿷓w J8vI  AaSړ#j~j;h:\n{mj. %~X]& QO's˓ ?z&35xleC{6җr>'ꨇM$ |Jȵ^z[w}*e.@9֤eNj QG!{EL`9S)2d*ɭ5爜epԸW@W;Rm)2Id2MAf6,X$pjgL`Z Vw )Q) Z,Rě`LU ቚHT ƹLl{w>ئ v?XN'7oٺxʮbَYWzM?9r(O!D'a)g99JhQJqYEm-zN 4ds_`xgU*ku ggOgSJr9&ٳ&|k 츰Uc;k_Hut9E:rCBm"\y(~ZO~ޯh׆(Sx9sio+w^t{/)sXS[BM^zB$ЂP! 5'!m|5ZdBBfkVlԚ3 GAnTg Bߍ_s>q2.䕧ӯPsF\Q:-5٫87}nN~/bb2`CL4B*Ha2(1fœJBZ&4^^Iţfj{ ^Jv[|͆ =A@ʋP+C}͘)YG딪y)#]c;cmڱ䴒Rp"\(S)B.ӜrJ KQEA˛=+D<]δFyKcymy3sC!ꮭ8L꧃R p"z69P% fX-8RdL8Ahb};72-z1"|4DomTD٨h:&r̖gme}epޗ+;R 6Rib%㽇TQd<#5QRI-P.\k"CNo~Ƕ~%-BU°C}ٍۘ˗&Lf_ WcnIB$툜ӛTs7[OO:W ZTYo+D'_?}h0K f+fmY\7Ί+8Z^VO! 
_F[1-qi&Ѻ{]]67Z-9üY  ş̂o&]0P釮^},;e)OG-}~a`Gjۇ+6 y 8PO$mVGOwvw@_>/voSN,`wuF{yutwk)k<4n@d0&>x3i&fڜ,CwqDžqw֝~CźGtSEwD^x>pCm v=gKBaD<1a l9,TIA҃VGSKvA( 6OW]nu_9oJ6mwe f5o՞ wl=Y $={gjc+ 3+宸"zwU4 tW#vRlθ"e@ZK"%5^R|j0ݸzzͺIX{zz~iܕzR讞-K Tp[ƊR}Vm;UhP}TktO8וWEOyWMƳvhùU Ho^~7MZ~]fg|t] -b}t)oGk-v]V*j+1KvwUޤ\UyLOwtȱFu<> ON-;#ߺ#KS!jMz9^BdiD*/5J)U7l:[v::=hoӷ˺)}.iGC@@@7ƭA B;;;0zNH(W ;~H߁H߁HC4H߁h7H߁ wH߁hhH߁hH߁H߁吾;;;;0B@@@@l+^typ^pimCOp08ndɑ$__Jez^[Rj=˙%9$qj {DZh+^:Vr~൴ֿ\=ѫ͇J!*4xa+O nR(?Q0zNJ͓J~^3$bPM\mJ I$]P JR(锱~ꋤ{x'oŦҏUwỻ/_wMXT:(2d 7G"T4H)9Cv" gAZ̦Y`Yߵ2x{)c@vؿ+ɍ'/4'Ň1.PJb2aD>6ɂoizwp 'I1Fd:d3 g糈HByɱtk#Ӟ١.BFRkyTo͋6T /y}mh?^3nn.Z9ў#kJs׃d%qNS rFkKS]X;,265eۿ:^գo[77]8W7ڗ:N?ހHD?9~?0Vױz=&79:3h x=MD2QNOqTTaa-wVq0񈾰 7tҝq*CB4Z39,La,ɮ~t;.)?>Ζ-/"+Ґ081Yјɂrq)FZb"lAs2IDN> OZ 9edAdXV:#gG/M^RGR)U- $Suτ"X /Uau Iac¡|Yuiɱ$q>%J^ _ʻc 7b2r fO1{;$%NhM Qb"UL&M>ޕ*׽MkFfoC־뮨 't:k-l 9\L ;^^ Jr. MeװAh6c`ʆ,ﵡ<:(aQ &E]+|uĜP}1)cBiDE"k'/+Y&k 9"u2$뒲GD:#gGr"mz=Zَ:.ښ{094fՈmǬ)mvЋ]xWv% +lWVAng_o&yӢkV˻7Gw]8rڅs^b L&!&)IgSZ0M*[.($S柍?1KXnmd2OBx+r(OoBi0xvڝSՁkw.QԣjxީV?wX\YIau"-B @,!kYww%疄77UGoʐT})#RTך3rvHhEvOՅc]{]nʳ\8p(f t~Ctkle1 1䱸 TAZbDvČP {`ڲ Y ҪjYHS!("5vgli<+hθZ:kނ񁒲)vJ썈B8)8K6xP[%].a1ELH`-C E&Ǻ&@Hyl8 1b@vر>쌜upU1F?M׈](d@P:;6 H9N)*p:: BQ5mG%h26d{J$SY .f .4i`82o"s&(uQ&ŤQk("{*HyWk_Ii Q[3z Ş 6+l$YA`$ٙ%C=6kdrLD"eQI6=@@dcQ)9uB[^z5O RL:R1f zco- MI>`)HiC2)>HV(SʠՑug.Y߀Wp~lS>?>[~|~1^um&T%t@֩ 4! )ʂ*" Z@gɻS΅mFcy]q,`0xWёȂ@V* kqoU-Lmwa݅uR9DNB CH ĠU9JRG&y{ ^U[MiR)XrH֢)%[vmK;k Xԭ =&m ͓[BmX߉o,bv:!J71@4ʘ1i%¢mrjfi _40Ұ4ӋQ ߅+:Ze\ZfŦÏHOpJ6(}q͸]^OjZ6ഽ<~&Uvb4g߮D"{omN-X֎ x<؎/yRNa89m{ib=]YMUL"xg_n5jFc,>e: Oojz6JƪnHlA >;oj:pvC`kn˫`3Cg.:"H^=B\Z $Nt7'or'4 woE`Q0$QJ*nq(5Z[r /`ߪѬe iܾ#{reW(;U=;ijrHu9!Gr+m\kD=x֦>[a!Ozpfz&F“1? kV Wy"<:OO-+{;V=>l-kg`qخغ@| ◾W޻h_-_r[n.[MPX4yʼKz͎$[PI[[.ҲiZ|nЁݰ'@FOe[h4HPK`PhD(`xc-GN"S]_ gg%[`d3#advv?F2c٤= {^u7Ѽ$Y=X|{|5 GdWS00HyRDÈz`. 
V omh[4ujM![[28d/v1a7'o$(ѸTMY{V3;Փ<j1F: lp6HF7g)Oc.*5X@UV[u»DT͝_gl`F6Υ:K:.ˠ\(w$?:QF56\ ^2ti"D9j1t ߑp%q-ҟ&)%Z̾0} jjhmQSY"dt|0aZgݞw>ݼ1t|BSF|m\X+>fsn?C_+s]M'7g)7z$ Ȳw֔Xtm?} ?[%qI-;MJnfOk`:$c hu`.x`ZŽ֜,>; T[|8pO:e]AS5 Ԏ002 o;cJ6QU=E2}sFSsްѼUZ0։I}=Z<,ڜlxv%PUu;L  p`B(eb@ 9qbpЄbLs ji3F ؽ0E p/o<po->'%3MgR-JOTjBrVj&p>uz{I G>%/=}ix d'"z-GO!ڧw(G.?{~JrXjQ݉\TMG `A rVN!g]|q_j+ჺ>HO քpmCVۘGfgjZ$պv.6~]E3u{]rLo0}Qӫ]}3U|oߴ -V"XQ. ~8HMH{GF4~2hc ФtP8QxSAb+<ϓ:g;=!ڧ~~eqc*h%{gqoR0#4'(xAczOesz֍nþzrO`RMEǎro>&I#h9~.J<(xTk]*F) |^6N*qi@.!Ty%|Cpm =F2Lg×L N %F%(MOIR";nQ B& |D$Ju'ǡa9$$V8m0RS-y lJ!,Up ;9UN =w o” 4*uJH*A \"gP&rj8 F$\cڋei/$|$ F;Q$鄾쌶 >dZ!@ cL#)SL|rQq$\@KeVѸL*GQ25CQgByL*,WM~:;'c {HDiN FuPqP kNX0FDPRNIF;hZgf=4W'tҁj6GOzeFv8HAтϿ7uxY='Yb]~/Ix 2v'Fs]][\9í6m=^C5[)[V>4srUo|L-+TܚVza.lM?܄APOT‚/,a]QM@xyryoYFW\YR!Wr~3!Jc&ic6(SZ8SmiQ9Zn5B:NRyd:DP)+A`";XCҒBNZ=e:iw1yFGﰕ4m%b]g6]hlɍr3k4hl;OTeNu3;/?/v҇W?eK3xJ kgy )=Ms|/>rFe$S5P'B: zGqbs2QWR:sn$\Yd x$ )Cb;cƌL"^ˈiDk45e iȜFΎWnu55N ̧k#6E{p} 65Rz侷fvԭ}ݸG][SHZ+LC}R%\8Sx4lFg uj~umݴ<`z:077x~8NASn؛p;,`a{7sXsӡh]g ~{FgϷu6\Rd1rR#IE͍oBFӯ D?Us!dqΰ-fdp$JlhB.d߶+t1ck`ATS)9-#η/BF_^SvxZa?k[k Ra1ZP">p\ @=OZbO9uqII&HmE.(q $p,ȍ$ D1,,H5.fTƁ 9@g㓧B3VϞ̗{|YϪr7`M3ĥL#Am(5N0cA/HTY5P2z,R܀P>RL@)Jm7@ `̙rklOK9.OՅ4..56"B墌M~Fcb(N\94D`y}YhdV ? JL`V0/2B.Dʬ]c/碵qS6ˬY;v`@Py~#ĘEe2B+FEbsȅ4*DbQA9AQN: >0G:Hc2llׇuP?m hlT3kDiN#f0BB2[g($^T֩ ,)\$53Gc9k6۞uN^Ҟ@(2>gV1+$;Q!tFO2m23SˍQDiJf)W4,>(;hb,pW}7 |=g/L8n"NhYĸdjY/ >*̐W`BfbNc(͇7Wk^˃p@KBPeb ш6Hyʛm!E7\.CHw$H cRYufi%b~ Ay'I#XylZbB kD!nɅu8C*`$R3m4%{C:#Vy=ϯ# }atZXDNymp[kx" ryELT'Hy)ہV%^QޒJh;Z0s_ *cYn)5O%uiiGwa[ 35K`s`p#X 3ŧp&-HkܙI*f7\G;z0E ҳچ!? _zDpu$=`lX=P g3I=)rP7a\ &@DiClָOiPR?՝qװY 1SJZ0F@ٷY2PuՊTׅ~1vw__ ox 8Zry)Z p/HoViFv{)!LC7RP TUj F2u^ N[lB-pTa5R2#B܇>$-)x?{ejOlxԆQӿ9zUZnwx~ƒ1~5<^V XC6K |xȳ?$}C. q>O(?~(@=dmznLpMz,B&,%䅋4cbBd-bո` (nE<)h8N1Z,;! 
Hcq0ltiΒixrw.ŐxŐ4ǧ6'}t?ȡ+8MQ#LYQE!-f2@u#@m KO_hj&ҐoᓟL$R$ov8ߋ` ~;W}zWq}߽q/jƇvcD jJ(Kw)lpUk9rf5Y_\7ڍz)Ԛvcs,lY^Mtpm6 G { H@|wd~& 7̨Pki8O_y4%Cmv3d e(hhBU=JGCo:yN6t 7} 1&'L :PYKE'W j"guIί V*I>;ނ1?bý 8l[x,[5uNq&7-0`b*SE іjSxw~;R;cR62GzIT+"jIc$ᝋ"d|"l8(ЎC X -]H3޴)RQTNg0Q8Ƌ2a:8\]_s>mOovwܜ|4sv>W?׳. sB 1 Ku*%J! %D*Qb$bEOEŶ]#I,O&K5x wJ2WJVa4ڳRU!kp3q6k8 qe+2soϝ~uus)ftV=g=鴙\8-X2l-Ty*jUZ'=SJ]3OCiZsUvX[%s*报u\U)+B\N=~sU&u2Ebtu,{4WıʮWƽ,҅?_Nn.N f=+of0\_3/Nb~TNi뇳UFOg/WWB1Cs5~2V飯yVJo[~r޾dJe jVhCtNy̤^Ҫݾ{`R: S %E`Y0tYKh'׋'-hYR*+ψhlh^ ޙO_gSywtNtrv/L'ԟ>_tv}y34˙ɟ1gBCڒN}-wJfFNLWqNLFc7UJ LC3S<ș6h9Jٿ4rJg*\ EICʎ!9uVYR. 1NF̟3j1]2y)h퀤/c"S[q@*+I) CE#=6F'<)ʹGUu]+飌L@VlW"Zѽ-r,Rd\eT\eָcwUJUKWi\i{Bީ+EY dԃGaVwJl)*U5)ڒEۨuktRE/Kcb+nhsRAH3Zփy5i)?nARm{?_,W?W%g풱܏s?vع;c'܏s?v>;c~܏s?vع;c~܏@s?vع;)5l}R;ZdjLu2NQ :ZGCLu2N:Z'S{Pu2N:Z'SdjLu2N:Z,_Y K" M^Bn; !푇BH{D!=P*^QIFwIr%C>ւҘ<0Ռo6g3c/,y~d/^8%6 &}ף_;mcz{̶IALǫ«N# 0"Tqdhr}?CRٝ!<:|".qdoo۾*5OPAPAPT8Q{ r 䨉R+Jڱ$F[-bօ4jS[ePIJjP+^q !U }l]Ĺ%DUM3>1{fS45w|}Ǜ߶מU1+,,'.iv0%9stB'p"G<m9X]y(<|# f~|z/_aq}UCyO۾s|*Ilϰ6nA1P*Zo̠3e'cM>RPW[:$A[0GHc *,%o,hI sDoRzgZ]~i/}#/зn>0Y;[B_M>oɈh 'DnFedZTVhCtNy̤coʽ%qԶh?YsRF!9-(-dK:EřDMYBheEe{eɗ轵CA* D!dX#3qAZ `2 `uߡ;]SG~U0eEGW Gs<;[n] #S (o|.1^Ҫ5^*17J|1S*Di+ %RDaUDxKR1IQR (TUa0C|[y"MeWc}ۡ!jE%X%|i6tYE;?HL+kO_OM[yGNjm֕v Q$f> jH1P S)(Su)VR"i@Gw1WD$%Yu8[? ]SGSl)-DVg(8>.֡'Uas h0/_|`XUu Yx.D銏RTphW9 -R &V@QKtb ut 2BLT,y- HJXע .ްJ6cǤc9֜kﺓ<]q m͋5A}@ "E{km=f2+ p*jzHg<,SgKHxiC@OP/NQKgB9p´%YƁ٢aFxDn(=7fe״/[t-c'ײ?iԱ9vttt l~鏳CYՂbky0<;pORHa?bS)Ŝ)JG!pPykekySqDcJN s*J*dm m>gE9FNRlHde. 
KUDJd%I_'i 6 SJk[~xcs7YdV;ŀ#̚=L;l\⛎Ynzf:Ótc3BmXJ7\ 6q,b=-# Cai>jU.>Ms#/]~>\_^oqvsb_vKw鉳n8rX|w]z2\m Vn1}zx{C:Ԁ}9rbse(>BF?:Yd !{22 @ *+< 3*$E&Co*9y>g d*?{p|EuGMugr坭yla?f\8u"`1eh 1X &h:aG&+y:ʈ[ ,)APXQXHߛ!XU?H0M[og!|pE5'/m]Tvk_E=) Vysrg/M5*Z!ksF6#d1UBZFlIгphCJ>'W xHjmEYZr[lfƶPw[﷋7~[O~L2 $(pTP2%"u-v3q6[p9NKXfV[mV#u3rPZA)b&Ϳ$ aB$p6rDhfc {XXC !32QtٲIYDr0pd|HTg}c{Lp/ڝWx,xEE"'08lZR1Vd*TRFvFZP"rj]o#9vWS f ȇLc@>bQqTRKj]KnWXw;4<.[6޴pZᴋ7a'`hVOTH&Ѽ$yUJWI*I^%ɫ$y$UJWI*I^gͥT$b$J2WI*\%$sdUJ2W G)\%$sdUJ2WI*\% ؒUJ2WI*u$G\knTR$ yZ#GJʢp`ea'c9rS;\Jc\,rޕԻzWRJx} } 9gFꟹpQ}^:8T2Q Jj<9wʘ7/;4i%꺍pWQhی8{c{ sxkKqC9h@rOpيR'.d FY7&I2F_Xg<2=, ]ҝ-ؾ_P!] BՊo?L`[naYFo#': 27-eeFw1!+!@I'm8F"J 1Up D$sIK*>9&mMΛ/,UϴFBN$ck49":d$ˁcCPH7qW4%M'0 B311jj ֌1M ,gFD @1/FPTA*$ J(LW (+f5U^,åMB LanM r6 >%F@;9=ű^H)E ̖M &Bn{Uf|٭}uvg Fuvzu܌(/V1ytgVgtqQO͚%8_)Bu"բw=7 +g\-Ϧ|36=ahWŃ 7޿/I~4ŰfƵ/`8#.fa9f~Ξ샺#7A+mQv)^G.⧾w_쿅 q87.k%#GiqU%CQWopop 7M?}3uea|4gd@~D3+jGPN=ъsW?|>\šsFJSDbN16qr%l/rkq4mCkd|Eьh)zOuZnwx~҃ yE->] A߃ ^@faz`%Cf >{$X$$bT9i8+pUf#)nJqoe~WWQ{_{(qP^mo6GտM`.6IP8rAOK*PG\Lepjf#1Ȁ˘޺]un͑MհϧK4.$|9~O^`Nw?tD/[~/[9j;/[]̡qe.=jg\Y"kv]Ϣ#}n&rdLjRGHB ~vF[WY KLՊ:on˛CmCe!muMhH(ri*R*V^z[]^vDF꽕T^jʤDZ)CTAgi"tJTă\Jrk)<;pyL^׿g04R^O>Qo㲢:!j㉫^oe^gh9}V{/.rX?Kf5mI+לUlYf>_8wrAKukk`@@=E'h29mٵvm"j+]`T{uv*ݷgԮa_ Hv!2OOTHKrk."6maGM7kߛԎ'^ndiqب3=v jxI0T~0iQ{@cd8&Ƽ9*)z7ϦnҒ΋u,ZNw+ClCQuq8v87 eUu`b/ $Ȅy9-TIAңV )f Km_u9ԭ/ɔ6]5K!Gz]Bj~C^q75/5VKW)M {Un]e5͠Nf}UGE[\V F[S3mĈt }IA⒂QG"#匣΍H ^xU$$x!N3@ݶ11`3F`hUqA8!bNYH mΖsHd#/:@~ێKϳO]>rDaZk-ݛR AsS3רT-ǫ b kR9`,T Tދ`T^m?%A<;7n;UT9b(4gjP ғS']R"D.Quۨ$y!֖k Chh uz4kdqa vݨǝ0|spOυD9;~dɓW2|ДequrjOCb b<}>ʹBme hD"W%ɝ|.܇STQ:2&.F \hj{cMF2fRZ샶4KW:[΁ _B))^Q N 19Vi3<(C"R%Tap Mj.B!y9ID6I(3o[&!c K"L2 +&Vy4_5QO#(~:(Gr2?EIvdMYݪf"N>LOd;21-n9'T%'S:f-ѱW}5+U_oꋙ+1Y2pes*p% ;\5+Gzp彶tJp SS/:u2YoUr%"w|Ʊ=8\=`80\=CKv+A L+C?ҴGHϷ˫wϿrp>y|{;5b'ŘZݼlW.C;BmAQ:3ZG:C S9ŠmR%z)'˔l[p^z翽7|'Pf6{ns ʟgEFy Ӑnwٛ[_>[$B,d@:"&k|n%u ł `8ʊb%d+ub㓱NCL%!eN<`6z- <C 4y7?xρw?:[ z5D͖/f }V3Z~Cqh zCP3ORGRNHB.h嫯I'0&Bh\TR998Z ֡W\uZC PM:+jAj)J:j̩BNYL 
͜~%aClh1KX]QbdeXוK>4Mi|4)Qz:%A1ء>%f.SYl;KʕqI, NBqrkCbdH[JY դώtqP`@ᐘ!PQfK"_]Kruu.d, gcܽu?&'V9f*EOswJF vꁣyA]MnRok_My7owyP ]bk ^h:M.)k<5P;!l-Nb2=\B5y`PќMCwfΎp5|:'o}QZX|ЗQ;VET0,$˪ D![%4Wp*X!CޛFD "#0p`(oThZFȂ+R!XVcSJUY ^4`)].j’iF;39; |JԞ!|xlM@=ӯ3!v_\=|9xO}E uu\|<+GqNcșt@@g2|*$|6ET1#A<3"kCk[Ks 7/&@QƲ.)*ٹwECqvooÁ#' "A|Im VvD+6M"LU5o돲Y8YR5>+AfԣHQo6Lɻ9-!p&<InxKWH3SLgqиȷ!#<Ǧڳ/QPc2%&#֘=̉SmH#OgǶIoc^7`0+_ƹ衢w*Cuc[ӳȣ|5/[S]#1~[(0YT230 :p(ƱkO^ x#sG*p aZH14T (bVP>k8c֔|\:|rLm`=*ٚڙv3g9.}|uCV*ۛc6ae^)|i ٧QiI]]Č|@|* ~oBOwʇ_N= lf> V+p[GR'k~gj5/"$l^ !)X>JٙSEkQ-U:.FvYL^ 7 I}ku?4o P;w Fq$܉C LJb,(*$3&WRt>λdoD(}y55難fmU_b=o@ye/0GLeDή{( q pEB⌆18!21ymLBAlKBrvUG-ar ؜+!KwS$tv ,;oPI49vh8#Od$gz0#@W5c[H- -эuh?8nt_ר?*oUd7G:ej.YD`b) @7sɗL<};N9=6>4KΓ\i7oܼ}#9sᏘ'?|w)bmet\SN 0k*,9B*@S sv(|ӡ A'O Md0 N;瘝5{t3g.?RSSՂH0m8xdEŇRAXkm(A8EFSlʇPyau fKf2) 8&5$]\K*/Xb`WCd"ŕt\V>ٯc}Ha[NS|bWQB`,3yd#g!X$*@،>vL>6-|Aڍulm^lX w/{_&ŏN}5:) 1 9bPAhb}t pD pbI(b 8&J*LH9VekHVۗӕJTG־ 3[F,ٻM: uln**$߮.%z,Ü-_}ۮνL<5UQw[s$v~Ζfz9 ~˫9ҭhڢCn mWg?jxo$ij?Hgot{8{ߜyazy|7n<_E-(-'Ρ|_7.+W!P7sz] ş O.ѡ꫑(6G- yϱyȋ1L"i65P= !CM.8cȾ"-EUlnB{]0xNJLƧ[hS'+Vvxr ܭ'OE(ڸK*\(T GurDZ xqJ&DB_)T΃VTIH-oLFFm#eũ@/sh9+e -dġ8yj8&8=~ wY:Ter6(r.WbT)rb%a%[+q4g-%B)$ =%5mGȡdc5AkPz#c7svHRFStB3b;._ikToZ>6KKo!L&L_^l p*pБVfW|!$ kAH岆ME -  4lb쪄{}TwFnlGxyWcAn㩨QGKHPJT_2 J9rL\ (35=ٕ+YRd(0CE'S`M.*Y( qB̕JTBgs0y;08aPUGT\(3}OfZA/ݵ^kE` c]΅ڦyj*" ť eԪ9۶by"-PHƇUm٫󾢶El?=3 1:g2wgnlOQ Ϟ6^0yC̆O']L_5QÓH}D}G_.frәb^Z2߁)"֦25Y6D`KK:`R~$6lUuq`|>t )XtٻFrWm^7r$Ay8;Xgd#ifSl],˖Kj=mb[UůUe҆g6AfCYKN% S<@M(lt3$~Z,{/ǶN~zGzkWE]kE>LuBP,5k xUTCX3, լC6 `apg(:2c=&B$ - {R΀HfW"PNNւ:0rǔ9HU;!u> K#:v'N"Rs3uۧX+ly>\\,m7"vْ't ̾Ǭ5y˨^|vT$*me *b֩B gI"u[EA}`s҆9B,cժd)ejd욀Gqڃl2zhW>rύq6]vYl} ֚t$yl`>OA7YB21xE]؂1(8:Xpج T|RR\@J4?0g[{3|Gd Pːl^] (|$Xs |z5Q؝Ad ED5^9ȃy\q0\YfdEV^,8gܛ!P7Yـtt:M4Bxo[r Q" #Q8ǗkV[~B~V"gPHee8VUyvhG,Z]B<MEoX102-M = rMsMgrb5M/Wx[3'ڸiT4¦ %yzӳ*-k{w{YO}[>YOl[yN}n'h/En`?9rsɊ(,mD6BRۿTK-p.z-v%hKK8 ]%iZnP ,{QEI05\1 $Af L."M o!Iyg?nґ3xaG.1bU441nzcsrV })g \ ,c 08hVX)r0Cmѫ[uQoMGY7%2F# d@&BPE !DvZt $RJ+!@ 
j+UTXD.J9VyOt0m_G/6uxY~bq7?>_unT}Ӽ!FVEɺ4o0'O <J:(Phxeљ<(o'ʻ@omq7mj =U{{(vcvovߤBjyT}ؑiC^ [(dRz1 -Zl`1CC!PPB!*ltTIvL&H}( (K q&b.-tsBXv;NF^8XA5j><tVÅ%^_t@~,rfT:I Bʪ-RWhrPfk5whzieDKIbA,$ʢcv7!$+|҉)(] CW]}U]/vKztm۹H*x ݷz9n֒?!;qw4 Qxk94XLi0D:JY"Tw\K} ,38@GW%$DQgyD)Dz`3u8Q] y(I1iWnJ} Y0u˸xNe+w f;Y@ѡrQFaY/(:Ϋ D KY 7XX* 1 ``r:h5Pݚ[X[{Xs BUShgMƟ!ob,lɡwk甑<}}Z(n OO1 ++ !{Iو(ufE'RHj=n d5 c,G<ֲmɺDhH1T|)iiT!RJeёx#,O'uQ=0gGq HQ&d2CJЊ~vI6aJ PrER$:8{އ[t PоE= u] ?c.xUŠS7ZlDR`Q2yΰ!O#D%$J oID47dsʌo>/g eZ0lo6Gz9/-~>Řr#5`,_HcE smyxgcwKIYx%e{-h1jxLcӔw1cO)* -ϖn r$^{y Mьs,e/YCIZ@)@J`VI|4N'N{vi>޹fG(>wNfj(6`Bc:+B+9(-!:@V9U rokg¾)]m}M}o8Ja}Puz.+XyFw4^0:wJO4n%ƵشL^Ngy8{.])zՀD ga<=Py]Q8׋q*@,z$"DL*ʑQtƐ UA1MaPPT[Ae1IC6 CL`R$-)'KygyJsk|!֊m9]{w2Ք*O֮/NF߇s]v}[>{<(9^Himy`/M*4|_xC%D!Aނ oA'=y;wys_`mb.fҪgw0d(9UBNEZ}ƅD A N;-dF!`RjC9ȏ Mao,8IT^z $bN+ b 7br 2\jN?Ӡc}1- s oAjQ RPzoXD{{Wv^:'dyo;}ߓA[+*ǃAC8H@`_Cbi:6_h@χc.f]0Qh&&`Ca=6W&;-p=:&PI@AZ.FYTV)9f&JXaq0#/ 'P~ݯR=X7a؎/vt\6MO'鏫v+[غ9Hx)H\Y+9fB:62FJ*X+܀>Gףk/n6`}&k}֑¹ΑՏ}6єS8UsIX|4 C*l @g풽KS=JSEirYM|FsHk~hI#_8i :e3'ꖵ9[S>zrfC"uL1 *"%I,FyEe;u'nmGgۖu)7[ Lږ1„{{sVfL؅>ͬsџGw"w.m鴮 w槧 Jx[-;|trn|Yn]v}0.z\.g^[&;[B 䄻GL0_]ڷ]HXCSvP;Y֡z78`eQA[@L0؁˞,26@4nJCQmyU[Qo`х[kJs2NS-9:%T@l"Q>(#eܸ,F$6 Je()^Fx!8UkZ=HMNj9U4_L,(FyCmFUOڌve.xb*/Bޥ?{ƍ)}Kj9zrsqmMއDS"%qfHQ2)RHd:3CLgʉ KiF@Qk$`Fǂ^$+n![d`#YE@H1 .Y*%bdF1XP1g&HQ+{8c[,-c!c᭗Bo"Ou}sCurM(`0z?};Fl"E.ƒ E #2|808Q*z4jerie0$xI? 
%^F0ґ0хT-#vkGl;rWP5Y˨`x,R0J9/^qd\VhWh\,pcxY1 4Cv4H$(QpXG i[#g=f~V&' bk-"qW&:֦TJDqt&$i:K4.4, < #$$8pFBb%# gN%Q`IA_q`I&9b])Z#g="`-b5 t%_gk\-.qQqqWJ6@Cr8*m EFXNj"N΂CNakܱ-ʖp2a-F?mxUT>~!%ɠ̟e8gK-7FT2MXɬ#*\Ba}l-pW>>+3>WQ7 FbZZd㒁&Ff&0CBYϾ\s>ߥpŴ\hќ{ nN!N(02 1h\|/-73IrLץ^yxò)vwldk|3e_"{GErf@Q8hO_;$G`?'jAi1z6 7:ZޥTc·=i{oBL9%C3|8TuKPW-q^+| z(KM翤B@~,tx)- P.HoVE |tǨ{,]SU>v),r'0esoS  N[lBqTa9R2'7 +I(X'㧑k0&~P'<>ȱUL<T?԰ws c5tY_!kw$_y\˳9MNnuE<ČaSrITm~épm-98hr8_u>\$nYߋԞ2}ן^oq`y۞GlYlQ W*mz'9jnL|嚛YL XX 䅋4aĄ4*UvDA3^qi6B i1 `2½eR\"As'LLj9ԫ%8ZI*b;UGPquyk҅^!C<"+L wc3lE7X=zL2D/`[Ԧh͹ 4iM.ȒYKE")ѦC/ڸ/ڸE5ぃrǝp*pp"N[GTDb w!/׼Ug z!U`J# be}0oʔQDL ` Xy$RD[vFΆKk\9>|%_ >Vesg&e#t~9ҳ~f]zϻ㐰H)d`!bRG9!Z{lvL5h4*kc&PfTetۨ15 TȖ%5rKR=qWE=Xڗ *urΥڮ[eY*E4iѤ>$O1M=:EtvaQX8SAlUV#] y̠.Sss@Q> $Pz~!Iߜ)ejk/CB>4 4u}R?ןj)7fIp>Q9D^ |%.^E2^"(ygq3:g} aU+JEq4@2U.O5OskziXq]#mЧ6).OpesRdv@C? |Zz8j;o^c;?7V u^ezy?X5yӑ'`SxIAd;GEጥ#_ y/uvXOVÝ_$MN̠{]_Wul߃q$w|2$|:O0Cxढ$ E`ƍN`w@q&na6yі62u><; fiQ4}W͸:#9@1 yqu.9npOxIG9Amwn+vWp[+Y_WyzmvYse

-|ŋQ&RSP4Ab:3(.LLG %IuXN^|㛳5evVn>Waz0wӅ@?}@~? )X,,Z)M:7I8}QSPr- -w=V&W?ZZv=Vk}ɵ4VW@0C'`"Wڅ,{; WJp5•igS {]"O{Qs?3Ô!}`Zc Oճha(z5jJ]YHE3z&}6C)fD3 qfՙ})(!\6R3{3:䍼&YoKpޱy .ZMöF%DhTHCKK)]~ Ө(U;K-&FFX8D# jHKLPIsN*[4%T8LEO+G:u$%\2N,s$k;qpk4[ M2<[~n  Άy/n?lˇO:AE.MD1N]MaA+5QDÈz +2`%{mExd$OӛqYgw 7r~OqU:}tװoƜ^8v?;6#Wݱ\ʅ]رر^QZwau׏Tx ę-8]@[[Ӆ^h(iMܵܵ`N3jnjJyp`yEc ˆh qg8 YR ЏĸMѥSUܻ6v_[ז"' ٭myc%t|C{%Qɪy) xԇj# V1F: l3[!94Pš- ڢza8$7X;\IXl9-%wg>3jS `7N1f0}.鐋i,OPk]Q-v)իP>i\Oy:u`&:uK쨺mҺ&;ߙZe>,:-=2JCl90z+eEJ`,% XgBs!lVXkE\f?J?&i{Lfjͧ˾U˧ns׶MSuxF 3N {T\Ƞs@AS@W1|* VG[#ˆA&K1aWRc-dR%&,Ѥ Tj>PA(zkѺ{FNSNDŽ@!WZq?fnU\_6^o!-w 0"-o2oxr6 ! xM"'"m+6!.x<3"φ< [rs' "ebʘǘ‡e# <O_<!8{exNcqӾ:6FV Ni1ŰW.L9[v9TU%Q7(CjAg?  w-˴{Ұ=@|io lmr]2aܚ\E=)GZgy3ט9TYUr=:C=:0c"pLCc-n;:m/GpÏM˃0~< z і-̵ֻ{fَ>9LWps{R՞fˆRZ.]򀖭v1I<]bf 1YE#F+±awΰdpA F~/\ 8e}}.2:d㹠 ڇ1Im)G!RP@{}F8Oپ5r AY/_l[SZp, mHh<Đ#yq< 5 zƣDOzxpFL=uBlgH7QW71 ?p? /V21kDFGt)kǑ5NƺL1e p9BO@>:JLqs{\>&W7+6Nm`e]%|i귓և>ӊČ|jt zZ?<#ܙ]ʞ^6~2|ם;{J=rr{=_4͢`;w"(=uނu_{ZY^u h[vxnٻa+&u0%kLnK"Cr[kۚxt@fn/ G+$CJ Vdkj d.Z2yckHT=\iy,ƁcY՘ bl0s$1B!&8k1KATN<$ּw& .`Myn'8h!xFr2l96ά."G3oo~Ζ3FzVF[b b )y,99y0A.ׁq!h*E o lb)؈BJ3] )k*2py[ 2jhJZXNi eW>:ʇ)qPkm*p qٔI|Э tCBoWNy.\+֡$ЄqU>RdC%BX]UNTz F׊:vL:-(1Jd`c 8]]c0'Fueu@;1m?32еw\ݝV{nq^jۯp|6+*r]*HUЕ`Z֯[˚&pQ{!%Gň6ΚfEEmYh 3%/ƧMBvmsZp{}ir R6_qru=]Zwܦku'鯚>7lsu< ߿O'|_ -n+۵)/LF bhܥL9ЩlGN%L-Ei4ߩɎ?U5jVL^o`ޅ OGk콏"0sdz28.ʱfT%s!3%Z098uٵPU"βf6zTFEhwo2H#-DoiGO.$ѻC g>hSXl}:쪃ɾU͑dW8op|)*M5dY3T !ATަbU)EP|ދlQ{=FJ "Zmz dx2ʘLV ޒ H AW$c_,cܷ/!=3ճ3^,y/t4\9|/sR q r Leb ֈ!GZ14Q6%cMƏ<$9CsOab+8=pv#v5XP{0a`ԆGMcIƇ`(uD+dt ZCхB \H>"r%k8VqBA,daĊf[`M.) q(b1K nL(:wGrDU@M2YZl%0ڏ=Ac׋W&]92=~oy2'b"[+@H닭rTrE JțWNK71e?qGܮ/6=Y|/A=wb  &bÈP([0JV"TNzP?Pg|F4ބ ̃! 
}F{G -|хDiX»KWnEE #MG0 K fk-!c N5Qgb+qB("WľB'6p80kA(=kR& Eh]}iXEkK)zYu :H-C;卑C9Ĭmd7PsvlϚi#P?rs?p߳κΚۯr2+./u>j?כ"pEW?O~S\PuF MɧVzWz(w`NjnN};ĮB mB>^6>Y7(zL_.'y2|yVOSƊv[O.SkgvC\\ş;b8i.s`Dgwߚm ͞pnp~/ir2盓=?[2i/-׬ e|ZczaMXDszSb5UF^o)f*M]IV?ѫB Q9&MTITځ X,OYkʃߗ{C2BkkzobxoѿZo]bϷ |2kR (SaA}vyg2؜s5ȤIF%->+IKtʋ7S{]즜ovVwѤeM~^e?k;?GAYg>bUڞlQ[IC ;gn!{ZO$ =W[dYR!*YZ Qf`]ig nM׫ߏ :^,dh$[nTYE@95S2C' k5z4ic L& ޷Br*^'Re/o:sw8D8 H.Q2oҢ`jj%^V"'c-!ڹycUBH*S((=꜖8bdUKF[}*t }gtĶcR79F?4 kx^^?e %M3=!:FJ%u>deFf1>~#?Ajzgg79BTYi3{TlRgt&W`?jz/#y!٨,j0-Éq8ɟI6Ve۪U8δedCo{yP%w" xgM1\Dkllc>mϸ7als{kҭ<_9ڶ1cl|m]XG2T( WR'I4uuWEb4y0뀦CG7vAi蠀;u z&ꀆcbUՑ 8*Bd)G 4b콳(apP]VcʅoV&Wm7L!EcVU;;g4_/P?>~FV/K!?ae݁2 8Cc:"ҫ.;\ V>)DBqʔZZket VVT}1gM. bq5Ԅ/Qk#i*LHSņ*tKD585A5BBݮϽZ{pmEM(Nٮ-,wb62'> /Mإm> ~~Qs r4UWK' zv8cknq5-O7@t2[G?e"xR4w_ϯע "<}dc :/rо}_/ˏ׏٥|(ǽW?ɵo@.IΠ*&gV1Sd֖DR5"߱gM AeK7/bTr*^NOO;z'dI~#O8=^v϶} |xe=F N,4(Ӄ쐬ke%|>9TT9tT9XT$m(%%RT@FpƘI\f`3L@W]h:Cu!Xx CU%W2[SiQ}LWYw˰*%e27N%s |h| ˒b1PENheQżG"elv2hB1R2*o]zވ}40 ևp`p0[,v=!~;^fMR>̿E~;X1IcY1 @K@5*ѸcqUr S(xg@ Ab%$]k䃋JA>;]ƴ.i[-~#lmn6gXoͼ]q[2T>)KQ̨\ؚ׳+i;W};W6۹vu1[I5?2GArEז[[ [|! 
;:2jm$O:P'ldapā*!(`IhNv9톧ܴy^;xDXK-:Fo (9RAC`gڮJa#v(#4msY[۞{tb++TܰJ[C_KOp<SްGŅ 8 LiHT`rUB|5b*,r0 C)FK^2&,ZWAR8eT|PhIg6ٖ=2&?כV.*Z2ˠ ayZ10pθ`3\aY{Dm{f6[;O9^GzFyk`K5BJ0x RLsTe=XPdd#tAF׉ڼ?7B= 0h_I PZ~1d)g8j]CcZb?qJAYoS :/A%"E-Ӵ[@_M_BN sT.8[ Z ѓ:sD̹BϪdK{=3X 2-#ܷ4};Z,vEZ&wtݰ z  Waَk|O[7YV,NVY7:G`dLOyWl4-7Z s/\#Mk䗏D)0/2lFLiyM-afQhCDcD] 9278}@@'ì\^]<6=uq+GL=!3$^y؆A:£ǪW\㎡R\/Ѷ{>ONgӯqLǹbJfY 9 CЍ\ϔ<]*W8q(*ǹs10{BZ9kE- 8DǁV1e}jIYU }XĬ0S5NB#9BO|Jv'lpd+<맷.:D϶M,O~<AZvzԡ[_E+W ~p|X){25L.b5qp!%ט`KE$y73H;":68❾pdSA@3[Sc$j䍭!QLj;1ӊU;0\$ɵ&)˓y VhSUo1g_>{Q[@Y*'HQ!(*X3AN @]uEh MhcT.ȷlb2KBi% r樲+3>5x4%y-HiYdA+3}u&&SB6Uԡ;4EȦ|mʁa3j"(ELkJ2 Mh;hXJ%+4*QF YT9Eة@%j>1 :vL:ZMM HVU zDkf牼Qb]*@Ў:vL:6-IFMFF&[A;܁1z+p~><\(nAqZ;4`Hi B VqS{lMIh8jO:TLPFZ9`VTtіAFa 0} @z#-n'lHi}m7l''[J׭*"{is=\Ci^}8aGN[HM}R+(ĪkmE/)ס~SKr.m4yp"]\cK%Ζ8ۧ!ΦCb;2bZ16ƨM܃'5Fi:ƨMJƨ`cTg WgrPςBʖ{&!')S:~oo>8`sDq(|5CQR+ ے+gK3jUkܷkM(JP *Lc)$ڦ̔Rm9m͚jhIR g0nruuCâ̓L..^()0[}k :w1[f1f3wK/W/}~q9>ۃtg;BnmQI7\PvWD ǵK*bt~6Un/޻5.WW^}>LϷ\ouvw=c /q/>™재o{"kO%zj5t8ET-p M&ϲDX\FW lS^t6_ g:;S+`V1dFMv<:=dGU g3eN).%hOM6Z5%1K]a0`2M }LiZƠI 5Co8S!^4F.Pߠ_"DIi9O!rp^Vr2Ga --ӻ~cm>*05VL*MrY迟UfI&FEۿב+`(D0Eׅc:Sf =KjBsif;Oc_eF ?guQQGIuvmuWfpF$]ƓfG?ccPiY>yK[$F &jB~ü,7?"f(`zCШ':\nUׄVq&2̋?G#zT?Tgg^;qJ 07+lQ g t*Z"w:H[VE9XO] $yԁh+9gkr>Y"MPPuw3@jG1<]PwfۦĎF7 rR~S+mltLe{lU 7Ζ'M= t|w v}&Fy@&mjSkm޵ ϫ Jˆ^F{!ʰyw'8|LEmGl[G]^`K[?o4.nzZnBpMGEtǍxyA'VblwZ$4Q! 
т72 fXH^"UZe{dK߷CFċN%^MϳBWIϗhWT#NJf /#γd۫@|SD'u@J#ʧ.(4bs1&ꩵ2%B*)a)d%"$jN~ϯJIt["Rz,f* 4nSk ^`k3}8B"g>vƎH ZoH+Zxtn"3UC~M1}$D<A5U6~n'.iQ9 RoK->+Jo }SWBOP%%&.CtR+e< ,M(bΙJy48Vkc_q?EojפKySx֗Jv6k.aMg2B?q 'mч`וrZ I\YtS}{/z=;ТgV$k$W4 zڰ`}`& NK1i)HT.Z` G7j!HScr`ojL2eV1'Xr'j"Q2{ w, |1r~H@z3rݮ;[%o(s]bugSFKBt!rsf9UE o.7pTfo*YJą3'AVIQ wTr78kv7X,o궾YKnݔzINݯ=g,9P=y s8`ͣeJW,}qH+?0Ѯ?aѐ9S , %8kjgըq^n*O^׮˾{)\g3ay Oõ2,57lwѯQKQ7uضhW~y&juhb}bZ5;o=p֜*nQ+/j |vXCh-~zŵCt-+mW Jyt(ҕTh!+th%9tў^ ]53ew+YW Zqt(^ ]Y(+xg ĮNWo(rac΀zNgpwIW;UBDW5;ޞ5VjymuS.Qq|+eiTxZmz8o:v(cubWFYBeg 8>롞^]YM+lEwbWW>2*[l94fi/*7~e))γi@5-jm-48C:}܎ű;Ϊ8TugJqZ_eaUgh@*)XQZ`A{uQ}[w%@m&VܿQg hꏟxsBYD-*+JD\'AҫwһF-i5RpK(1dpAl7~e/-'@eRr})Uu1Xr L}ufighv'w'wCkx4zޣ[0A@Вꂪ?'L[Cw7հ4634p MZE4dHNu ?kְ*BF:]!Jzztb'cvVOkEsP,b 9;yT0@QWdiZ^ZK4N?p-u(Y l̥&2) W9[&<:Ɔ0Qj>(ݏ?}Yάcuh=xUlT$ɉbdG4D31dCLHx22 (}[ }Ds.$on6!mE.^Q^6n_pȺZcׯ~pKadvѭ-{993* x"[%`YijQkb6_e)WL,Τ`&\h8݃j<2+eiDgyop_n|Yon3e/W4noz6i˴\jѝn..AI'- Z$oAx:bb2EB]>e%&p"rE$o:|PY&kfmӮDfb`Lkb"|~W*.]ϗf=*~j]hܽ{ 6ZNZO}]i,>zct>lw7F ȝĖտF0GSI%l,u ڤJn<(}@HeG'́77E1<]PwfۦĎF7 rR~S+zݱltLe{lU /}3 wmцNS7Q (O[WSka8W  BaF~+Nq`6^ض@U}Zs3t#6V'rj -7񄸜H(y2)OKYXk7Ȍee>r{jheٔunE"{H3/J-sފ ]]or+F)19Ë^@Q8M^qȎ$o_ߡ>IJL+{/b)wvp&K !ؚEbĂ'sini燯O?'Ӭfcb'>j# %gg8ob{yae0{,1u*[U_cCO;TlZHuEЉ"Cɽ?:+ =)|w>=P 26 QYcI&[xԊ,bv@#˒>*sE ۹,lJ" ?)U䨙8. 
)ϣtƻ9ör!wWts~`x,YB^ҳ;.hIp>D}`&JksNħoXc{y#OƓ@ _?r@ PDd클FoɛeT| )`a!Tq#"< "Ա#0 RNy#XɋPY 1BAKP&dP{":럨bQ&NT+!.Y@r*0]6H9EUuG3q40$PBl F~3Oˏx+vpTmv 鷞~_6䜲Vb HI3`V%# m}:nr\EzUfw7hEoncon9ϖ9_j*Op7Hfr;BkS,5W$,rIY3;SWM~6^ɺiss!צFÐ4b4e tEZS߬Vm"*hVI5ًξ>Zvݶݽ={#N}B ۯ;=OK=- Oy$N>~'75zs[:AtԆZQ/BV:?Ny}@jڹl5zyK' Y+m3`2>uX*K"t_;4M+l3vD+m޹t~6wpXhE.N izЪC}3nNSUu/ ]Ts]]po ]4VOnmۛ)uYiŦ#AXR);`KHS1 MGD|`44ZOj&g2cTLH~Lʀuev(:XhǶ:c)83S.2RW!/35waA'pAH(֮wŃKiOAfEIٙ"( = AdTiLqsJdڟYrM݈ٳyMj21 C&0i&0]$f䫯WgKƺRZآQ1w$ uخ,N>%+TFzxbuS 7!8$Ts)c @D)S,hU"VWU(331qv7g/Iseug/MOr˧^yV N#(QdD*J6Vx367Q{{ n@zM^n][,v3!>~;7d(ܾI _+֝ |3 jhZoL.`HJ5+䊇AU$|92AJl,%olMim2 o~%-~#lzE_y?я<:ϷH1_b"gs[3G|tES7: 'FfҀŮedZ CJY]$z$or,ir5K(]e^IDN,El[ %#(12 mtK팳7P.} p4;8(.|{0V4XyhtW ݅L 5VIa I A <@dFAبd) e* C (XV^g8, ߆j{_OVhW%!,(N-![a]'ٗv4&CGvJ*'N 7#9-S RC sT8kFD;2e;uN5FQa7vhT2.%9َL땘xAQ S5S%33cXHjndQV^c ztTu: T4Ld @E Uja& Fag1xs ꨃ(9yZQk\t,G)B0duʺN$EI9 V( D@3q9O Dbu6E,c]Y!X^C>ob%'w_O=v}N@r$2BK%%x$uh+|kb0 S)Zi}S] V%H -5T\SPJ&F;u$'6֕fXW 2jp4%H @'bkzg,3.D u(f4JG( Hh MBzIk=bI RR غ !'A@x_DXtbUNUaIy^-F71 IǰJeɠr:D>:PBǺM{s[WP-QdžcFG+#wᢏmŋΠquV/esw>}B}%Xʝa)RF{ Pf!|ɑo,R7QIk4[КVM="6gHP rf 5Z$"RrHU%xG8'%6f}V=mY/%֓h (wv./iH׫Y}}G+yPe*]w+:^e1_,.;W[w ViR矍[T\xo}xո?aMiFOt||7v|G784׷nzA_Բϐ1q $YC57-6%ۜ9߲3>-MF?YLPyEdPkCStd'R NMvdA$2[QS !䘳eШa,+hQ(}`s@V$2ϣd!Y?1MzSP+%S%bA1LRftݪTVQFv8>u @8a (.f]Hw-7+H(MЉoBPl;85ٮiLs@6ws! OS/6L4,0*d%r  z%a pIYjlIlA$@( =9s, |NT zH}aj9!fWiFƹXcYpk5ke=8'Ot@*sl 0bAb.ޤXl#llBx@T ZM V[.HP=@@Aʏ ɂKm'@љsR9n6L̡vq.jccG{hĻX lj/BX)PN 8#MFslxX>Bff(訲eIYD(&Q|Ttlo9 ~] P8ucD#"xpmUv)blT*5*AQTME Nk^AŗuHBGRcmQIW*Y#gB hUW"[ҀVy+1BV6*38_G. l1.G\<\* fVEQ:$T2TPEH9H>&J#.> .s6C{>5 ネe7x(n~|GrV ʫX6b*VqUZ'W$Ƽ0qrZ] \Uq/' lUFzpe]ͶvApkpWU\WUZC*=9C꒒WrA\ \UiQ FzpS]\(.Ju)pU*#\_=|̭/A#Ǐi׍,I?{WƑ Ow8/o{ aLZc3$E*Kƒdp꧗WkVWujs,i Kﴔ7~vguc+ÛJںt׳ɷ; #H7>1SgJXß?}z[?E&\q}ԉpL15|1Vԍ#H8R7ԍ#uH8R7ԍ#uH8R7ԍ#uHEq'&Ϝ~>oaҺgzx@X ࣇK4;fGّivd3#4;2͎L#4;/L#4;2͎L#4;2͎LfGّivd-4;2͎L#hGّivdfGّivdj%g$~{55hBQGO/Y(. /^ߞ 5 5V^Z4 'p[FdyfSN*'J&fQw9b|GAZWwm㦅_HHF2V{O:RYù!k @|(/BVo,w{\jyj1Nɺ7Y2HEfs6X! 
3G;)O>ZbX 1& Ϣˏ8jS{M$B1Th0 ZGCZ8 SdrNdlj4D:jyKu)>Qj8ޱ'#CRJQbQ۞ĢPީEY^oߛBntRm)ԑ75s1[dX~=Yiiog5s6wyvw]|Zͻ?NoP īj7ջ>mؿ4 whmBviߋM*_Bzu^g;;E׻/*U5 Fs^OdDis:ڭ1n߷<Ϳb,ᄡ7 Zw6ϼ˓uś?y+y,hW7E$cd J:mN\\1`%(HK|pnG@`ӴK^.[?@#|vo軱yn60ITjhx 6ϡ}=p׬oqasQ~/w Y7{ׇ١MmUz'9֩nm-3A=-}5[w7"=b@N]'`M6R!LHDP2&䘃QyL5zd<֭izؽ厴h 4j.·c|L~,͖ԝ{Ӻ`k7@(FzU*_X5c/uus+xJ "q'#RB,"x)Y{m1L҉xK)n,-ք򲿪}r <5Ҵ\1uu5 >FLb-Ĭ_/3PřFp8_:}v]u«o^ulo^O.(r80~c=4z'3Oؿ?~GI+M&,+̋eGi%!Y}$γgO^u5m:>O/d?\;4C׷{c\mTxMe6\^k]goV>On^IW.אַeg|ɷ?|f x9;Jue{88^lEjd .>bZV҅;Gz_QJpEjVi fv(3ECEMVx Ƙkk׻Wh| ;2Kk9B̸ElHK$SXNNnR>0zi־}AC?gǍ--|֕E܇T.̯lU-bZ {^QW2](`n =]v={l?;t\W7n!Ѻ:G~,WŮy6Z:I\E () ǺxwoR8x7Ev쿸EI;.cfnֶl']k&=Tq3R|:;8 #Dw ٍ3_ 5?v A]L>1Sn{Ȗ!Ip F/PZI#-Y!5Zm=ב{INZvuB}\EFwKE~{s?9Tvl囋Yʿn%EAĘuCe[dXp^I{V)$I9vHe|M8| ndхG)OLj)!rTL}J\ Yqg;JKZaI) `9ajpND 4#9S,6A^iѵV:'顊痗׳ZԣK(SRPEe̕ @R*r*s$TKB}ХDHZa2cC1ڡod#,[Wbi=B;4CɑxxwQRN=M@ 17hPHSL&LxC(*X&!ֆ1ИUJT5%b|%Ud~l.U=e0"묞Xᘣ#_˫G@df1hn`]XI6rQg*&D2>ko7{EuVEnXH>\RveyE:ˊ0ru}%C!n!T85Cpjk.Q(-d}H33 i5/HS`KX`"}ٺYxGm#/&exxv0`_\1>a"rHa`$\Vآh4<R|T*AI3X QrR@yUN+2 #Hty<pLUt:>a1[:5 ]K Yhu2$JB36B`DFB!(V^a2pJꬆ7L*a%PsI;ț*!E}0Κ!-'حJ!bQI2 gʼnQdTO #„\vzAK ) P@Z9,٥J,`Ё'oTzCes Cq2F0[ea!󠰘#z naeKh2誃C$:Sd)i% ~a `I5 p5X #d#TtgPQ\`)[;/6rd׿"i7H>E"0KcQe4,3 sn(Mq45sbysk0ᑼ#F)(`~G_!Aa Bꕶ#r{&S=+(xBLcr^5kIk $D8PZl,@E  v/PUTy X+:EE2yם:ڛ1.EV̺(NcB*&6/ A;L')A"_ wW`V9{M[C>o9*&P$W{H Tb0&SAyCs ~:XA̬Bg A9J#HV **e^i *]xf $Y6 NX)h^ 7tY,ng#Lk4ZI0xnAmJMKKުh{9Ex'oV QRSm`_uВLЋ![[TM.Os{A =Ď`-X߼UȰlEVǮ;bE]Q("Ndvikn`Qg] `W6.ko)nDhP- Ԑd64Sg<9syex3!RBP}Y{Vi,g覭 ӎ`zU6y{su(ךz8;ͻ oWȯݐ,TҬ_9{neEF8U\;`I^&TMdTBMGGR6yϒh,1'N /Z:N5_hhК*2@ݤֳv(Ub(To}C23aOi7H}|&zRSO2^m1ɔlBT&N@F joQ.$u_ѵTӚN`;}f0<o!%>f w3D,>skNwuGDkqga#{]<wۉg,N|>}ᘺ]r5Lr<qγێOcg LAAyIKW݅綵=WI<ڕd )5Z@ewsPl C~J奉Y&~}[ifz4;o <(5(œ-VH`B6uS&4[ &JE#6cΛH~)9x=vG#aٮ~7JlcJo% p؃8g4|zM%/PR_!n& 7fL r3An& 7fL r3An& 7fL r3An& 7fL r3An& 7fL r3An& 7fLm&Pqݹƍ5oPKNfhM!&RP %땧u[DnL7rט)wd=%UJ{n$)*VD#8N`j%y+DΨ-}LǼ|Hy4kx\qKk6O^z凬uw̨bX 
Rx'^Ski~5]ytkD7_2zbSf՘l֭J21iZQ!לi]lw~NNasRf&Nlz=I4ٵ&HYG]5,z)}NݙVP$|!ܱh3y&_ZdQQ)ӂ+xK&d?6SͨH>9>.a|xzkdI*mY/J.nu6q>J:W8Wf gN!ϔÙ)IdJub,ΣGr*8u۴">2Sf; tXQ>y^cX(<i%/HR3:vݦGkPKɡF]D9vHses~M&)رo/^Rkh6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#K6d#oy0d )1BN&n D4m'bco)ٍ? 6F,'W}_ ʛ>`@l! 6MS2{OijAΆrIFOr+r6JD9h^9_h|@I BorhimTmV92[8{XKH=+RO8p| 6޾d\o,p/mi{Y?YN  :I[m`4Y':iINuҬf4Y':iINuҬf4Y':iINuҬf4Y':iINuҬf4Y':iINuҬf4Y':iINI[\|M:izI_NWFJNһI:iVϦdpZ7RwB?iW+`iP`Rtp`h;lTxHFZSD{ܥI ۣ_\k@Sormŗ-՞M.ߟ.ҵy[)jRtq|{k^zvm;-.׳:܎F ǷmqtH;IRjo&Y߽,yѯOwǿɦob|ۢKv~ =c: ?#?Y;>rvv9^ z]o;Ŗ5/v[ɚk\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f~>hgGJƍ ?urhT|Ek#f/w1J*wqDiX"-JĵVAzjՑGrIGZXGw-7@p{-KcsE]M `eR}Mw|,A?Pil, [_~z&N ^ s]MM'=f|*@)JS4r{ 6:+lwm2`@ƋMK;W'(_0[}c.L |6;x_%`s:`tyۤ48 jV9V/ fRT9 d9W9EXYD݀jQJ!Y[ϡ֘l< 90}-@xTgo|WxkKwp_ws7鄩ؔs7%j(2J)t5!$مJGZJs;~xÃn^)~|bXC ȯ25kyeMZq诬)J[ /qE $6Y3gj{H_!.E6"6Ad [fL\W _̐$jJc{ ؒ9͞ꧪZR9*Icf>f:]000)R?гѪA<8wL}J55 1q>戚:} s \ bϋε%A^9# 袘U$1A7Tj.y!QW3<|8ly'!vw`ݴE;$«U{?bO6s !+f!b08S9#+<s)xЃ<_Q }Kɋ$9Q9g3G8,"XPԙɿV?Qƹ~8:4:&6Pw:WC.F\yL㘿w_,!/ 1 [Y\b(] eZM¸)ǡqR;>gm͙'7uΞuy=<l~9{vLz1b _{\[FeWF7,~!YC"wk9=J~Э9Bt+ljqb0u!Z{yWG۵vvy-6Mme˥zhFj>vWy=mY$*?Ta43@mv3l3޶<.51BuO֝鋍: Q"cgDN͈lr^56';>~Yjo+Zc# *RiR<|"pQ)FU0^" E*qWSh97x3ߟS!BP&G&Lhh*"h`9L-Bqy3{۹)bq/LK!?Dh޸3 /yA7%?ޑ@~ȂXyȵV'6'vUĆLGRJ75ӖH7icչǺC\\"Y>U9(%s#,E& i>$.% .qsL{헻ګzjs?yyLr 8/_X^}RKV0w̩+W« VϫX0֤*|0.c2^}>7Y/pqml@@뿳ISEJ%S.&N3r8 ғS']R"DXއɍ- H JhW`am01nIcX$ c`0,&a  N}FJdf]{;*HVW%![Ԉu{Һc^5s!$ 2D"W%ɝL$UTBTAäH# oMpo,`h0X\YJў}Ж`Im8Gң< tR)'02Xqx6Q(b(\KN ěoh(~P\.n94H '@Mr@yt^A3 ?">bɹdNI4eoAꖔ'UvuDfvp-%M:Nt`WHʛ s (XBG mK66 IzM tgo =ɠ>J@)g;d:? 
tPIy,q NF!Re]>SYh\qFF)s-I?OvS*&@ ,Uj屢 QFJ,mqFʁ0 #k4!g;#9bCJfv\|IzvrluűK rGn󈑝c^ a0+؆N*.߃e%I9B m8*H!U@B}ǴmUS?W[C]/LɶGZ'61^6?4SkBBH媀)}[=vuubJ8O F25MT5;P8*LbdP"Q 1P#:}罥BD Dy %,wL7 r~[cq0TGwzk;]?@ Qsp13v=u; tl A2B!U3aʉLb_HZQ&UӺ aqN;GUv>q ҸU-|:aV$W >3mm~<9?MѨ?ho9* ?UgֆUyR6M>)..j(YMm#ï;Nb:}sok#W^2r~GW>Bۦm/^f*Î28վvwu9hYo΃j4Y`VW2~gy3,{7|ӏ|]~AB7K-qp;Eͧ/m|oWyO~T rqgYTv@1(!VJD+'2{!>fM2CFۉ !{ib4OjB*D&h}tqRwGVq'^*Wh^kC!D!JSRRy6j *@Aʖ -#o(7}h$=FPe J簡TjQt?Lw7:Z|\%/X1|xè)%p&撹H^X%pKT1(`׌PʞNOt8 8ez"d7z\ &%4 CwJ:Ҟ8GP}*|a18¾g7 n.^]@Ud7 *7O?G/O7)q"MI'pMMF]Ҁ%íXC\`{8n3#%A3t؎QX &9t6B̮xbqv`i)RF`$qhJE2 W&V m,Tځ7 RQ4ׄH@ (CQ;W?չ71v#ӏS=,e{h UsQI@HQhK$8Oy_e (-06RpjҔx$<>*9H ͝Ҕ9?=ŪвKbZr_Tb.ע ;%QcMH,$8`diwqLYbWa18P F"}Ep}+k[6shuN9Ĕg*LƉ''Ui;_I`( AeGly|p<mI 7gJF"Swi}h`3LL/AN[kYRDٝIIeK6H$T|uA^H ?&&D2(V ÀZpk6Y=_ :,[%ah9!8e8`Xg&D#uEAFPAs#y %=eTm8v۪7?rAO;m,pDV/gkz}~9-2͘A6ƒ E lY;Qڶ +G,*drD0BFPD 'LîE++@9l&QmGs*Ȱ,'XX+%49덣R萐Lu\:rؙ `W0ߗGu>r_}"`k<) ¤_y.UQ6?_!Kp%V1crm#Iгxc}ft L ZGK!zƌƁZ1b"* NVFk -/¼\'f5&埧2l^jp ߚzOd"j/ܸKԪ4 57aiM..ȌY3E"ɜ& |k㵥 ҄ZP38wY 'X*uDE(pQx= $R/ Cxd!RMDD b FрG!eLD0`dˆ/t}6uo+"lVa*Y4CUʜ/"62eygGW㯏%ᐰH)d`!bRG9!F0o54*kc&PfTetۨT L 5TȆ91rsB;qNM-VA0,)JL~EY>g) %8y@DX %j 9yZǶp)'RND0=qeXUCWJ|J*>"qfhU"cWZye^RS,H\%D"ZLTb֊(W@hĕ.@E\%j);tqdzJ<ӥw{%L|TqHOa ʠFqgӘp:Ĝ?`* 1Dzh؋KG)9;+LϬO #,^: H/Fhsɯ!|&Qª̕~wZ(ڌ-2m|Dcre+n4YMrvZe^*"k/9vª1| 21m)CJc,uTKZQ>M}R$ 8R`:_ "yZ \6(v !#eR9oܰ @U #?]@{XsƓ`~ I(%>(o҅?yL~e}:",6L(0Ba7{ymQE)bס`5;!]6>nGΓ9oQkuZ=r%Kj@J!}Q 7#_&~qWZ FyJ,Tibm> 2a~W&, I8tg`R^A[!&ZQ*c1Bd4HanZ9Tx::iH0RpΨ%g<ysqF,1#$qg8X`$3IrX߼(c <lbx)to@[DklnNEn` ǀ95*Cz ^ wDx->cm1w^_ؐ5soMQW֤z"J ͚?m&ǙBbp_k_4{3&3+{锠 !0@Y+$1('tLXJθPJtde v8iPV4[ l<@ɠG`e+e$_ML82!APg Q3#x#"h,-i*P$[r[ ' !P5 Qpk, ,R RaC$ cVn9Kp3Jc҉"k/%% XZY&Y/C$lqop7qoszVMݒVIv@$n_NͪTvx0u}ȟZS)1s)&WD`A 8нk 1.J[ӌƗ5>V<7#Ȗ@ep@aN=H$O1'I5"$ *!,2V4"yn\/;ZtOp=8~GGӁ8;B`Ӣ$G~pyG|\_ˉh~1ΛKxvS !CzQԴg׵ i@dCRB<1p~p#*cX+.ƗEZ]3̇- H/S ZL?vhpɻ*a3,]68sYp:'0hx0R$.G.*Ε(0 aGgb/|٭OCɢ5+ZKA:-&C } _`%' @>42jEl(e|޲gP3&([?#,w/7ަ{Ir 
B#9aC5eZ:ɸpX= Y%W"(iB16m]U?u$.cU&9J-(rQu.eNrqh{-1E(9uq IM ]QQIཁY`I;bXXaHϨa0]Sre['G:sullxgg1lSNL@\J4B\#3K =aiV(7 ()& zSNqR[M*Ff$5sf"ܒ19DId*daq,e!-p4gqqa7s&yx&F D8,AT.HI1.N^'ZYFԭX*LCR'ZK(Ib3#xIt;t$LEt!"'UfN~Mζ:,fEj3?uDHI@) ,)4a#F%.|j rq\꒭3*9V.rQXZ`T%-Num"4'E}'~g!xymu+efy(gaݻ1 x?>' ßq8Ti/`U~4xRW:UP[9OiT "J9!a5!aGD_ʚ#֝uR[FLYO8RD>0KՓ6ybd0X^dMԃLjd<)V1Ii FрG!Pr#ҙu6rtYlMWo<;>e%g{8W!搑`e7KpYbF?%(RP*W53HJCQP1l {^!hVսy5?<}>JIB 9wY9A'$CG֝ 1"sb:eiľ0K}hΤƓJZS)2X;#a!^ׁ5:f%Lm!7=pFo|r0o|]GZtsy˖a]Y[-B!vi'fn«sفQ` |s{L YEm uS݃*Pi*;讴*J5'Uzrݧ]wvS7y:.-^4؋킏5svsr0-Slg2PV:/>E)xPL삏$Lmȭm B.e켊r[nYGt"XU0cx!w$Oڋލ٩ܑL˨W[qcL_Rk5XirTI,MtLXZXdݞt9ҁR$YJmTeB2^&tӖNZޞr(VZާhV >qUU g 1UiGa+]ygxyܒW^7Ro[Dž!ڳm3dax sYo!+׻Ij_d,Xz*fۖZ[XDk X { {0F|m] 7Ϸڇ7-o߼O~tWb%]L7 ~78쮦i7r {}Qk s4R 䢢  g+@aVrV٩r`*Fvom|P"a ̗'ͭhڶ7y-A\5Waeo+:?iT|_3ydw? +aEuYַwn2'r:<7GO>ȅ3&66Li Z I\YtkX{x gq"F$k$W4 rڰ`}`& NK1i)HT.O|Y'bv) O\N w]-h&>Z@ pW ߟ8L cI@XYfc΁m@\V=R'l:,JRe.qL28llP$GfLٱwF|/z&kV^Rһ) ~E{@`F;, >:6sYRU: M*VI v?r^eë;@W^Vό٧TKif2sAz >,!0*5Sg.Ōr=. Z>K1аq{PV3sTLLZUeiĮuFΖB^;TBX "%C&v\MC[Z?ņxi]qcY]Y &DPI-*Ii !BLSEOY b 900kf{u2W10x5X\ w"+dFSA"AHK-iAcb(`4NaQ/+ºUς6웜Ai%VXg94`م !"_ތCe$LdAE zt$< 6 0 u+HO*kmQFS (}G"<18½^H)E$A2Lʭ >,#JW_t."0p\b-};((tA۰ o vzs9I? Ï""cU NZT~;p88O7իei$.6j`fZ|\s𫏸foa_e !?^N.އAVY ۪מH_ G0> i &| Ҧ ty&T? "t# w(!1H5=^|޻qJ Yz%(gn0Krt@\Y ?9*g`.͵Fh.KؔT@I+9g r l0|lX#j4ULAՇy /]e(;ve=9ijQM {79ǿ0!%u@ 0dL||Z3n0`M΅Ōit8/O#++Pl,[w[V 38`gӠ2W;֖cd#dA*? ѽ7rণ"Uށ*dL`)s) Ԩ#3 ˔}=p{nhlJz~?D/{+p=:pO2s!.H.hт3E+3Ad!f0}@sfWâ3xb qf'yj \kƤ{>er vKq@Եd0=S$A: dfHe8D35QbA;VRb ])# )9'oHp"N'Ut@C+ EGǧl/xSVNeU>RYwG1_b E[8tca^FM/Gj4,Bp (1dp_F0B^C5DsJ>Kl"ayڨ<QJOuLyj<;Vyk-,Y<:Z8KftFSCT1͞jx@'.B% -\D=ٻ:};vp]Z߯Q4mP9*r'u0b;)ԘJSKf092V+g4u!*tH!'h7k}ۜyՍi/>C?_%&(!!瘑R8M6,:%$/2 `y˳5~4zߪ^1CJ[gMGGR"h'ߘ¥I9e EX/w~b걏4? 
iir W1w|]g۝Z>;6tn3+h-ך00E!3s'Wx ̅8O7g2|^|iG)-g/I94RT0WzH3^ty0B?]{׫;5wt4 MҦ 20LKђPp'2bAu!Su8Ss0B_ *hmGCD= u  #ԅGnn;\(I8O/\[2jYQik=ĂgNA;%P'9mwFW YQ}e$XclRˢL./17hqio&uAN.,x8ΓEo6{xbi&@MžLw g}Ux1PdZ] 0E[ ?q|y~,nieX5sJ+Ke/"R8ԿLf.x-tM.#Sm-ڞrh,2%4Q寺?Us@TZt1vyjNAOצ0[Lacp>cDvn1b> ~o𞵍mچշbJ%\e55(6ab>#QZ}n&2{F[fVr:o]0=[66BLR KB-0{/>0fD4e&@Wzn6L4wݛ !;݆U{$HFyjٹ͆g\2%w̩ &F` giD*/K'2䗗!k^+ <2P`6yb$@2N M+681D!|ZTx7uQ!p5ƒU̚Ɠ/t2ZSjy'+e lzS6 i"9e0&mῚiL"Ft(Zhup5QPDLG)a)6cy >(eObwI[fl[F:i>Rl&Yg\Fg}N1 d$Es3rI!Olh؁K[ކ,_Gf-6XMG]Vs«=iq=f)8(/wiW2 vKqFj+IO-\Nyo YIdLf$`GP\s0Ts$ :q:BNw~aHZ(7ownoxnW>u܁(&&3 k9b s0(ˀ s,:u# zB*7*ߗoy_2`R|7/ho|ҏkhNf_RLl:AxS;GWza+_dzn~sͯi7ry}Q+~܍JT]7@.ʺ_5pOta%g[!gWϼ9ݣgj6_ASr3=rv%wTr9Wj^%iKRvT[%D&i!.b0yƞ"C^n¹h+'^ۤM&U{z3~ɫ2Ǖ†nu^4ݯ|1X~bZ]{Ws|@#fՑ$;Ǵg j5˳ ߞDNP`JUةUc,%=\@2\e9? ~*p*Kpup>=i*yhzpDql >QZWOR+NWuv * 6W*qͩUֈc+RՋ+ɍ:\^O3jg|q^YZ竭s +BMuӀos sN+-aT{qUJ 4 >Ru*0ea:Kɡ  JܓaYZWYJ/8?) ,Y\NNy>\e)%%•!4\TU>D1yx,'\)83Z퇫QSQ{rJb)\;QZ#m.@|q{W<^}.4Y|}_7Jז H> 1#Ĺka7+م[pu8 Rʌ^Jk]d21IN3"SR* {8&/b}1@_ /e:{RS*vb}1@_ /b}1@_ d7{Y`COf/dB_PZ%g) {/f/YTkq]cx^OH~ 9Gs%NJY(H-lwi|!!Ρ$Bp PΔ "z\M)J+7d[ A1a=36r\BR$NeIr.<O5Zk9Bdk3q6i[ykq2/P|pg8^m[0c|C?_!/9fK!$G䞗t: &(A=JH(Z'kgr;ռ{̋Oeq5ֿf:s)t2]Qjˎ ^!OoEeZS wAb"&Θ ,xGU`v\ x>ԦVD27 Ph?.X`~B:UQܾ_rsxݵɧqـY?70;l9Yd?EYgmy-)k>:>ipEVjͅe֥TTBK O}?832RDe4M"49S꿃=p-'7MF8/ڗAJr'cdQ`s LƜ 9L*4OVv{7pp,HəĨv!Ă2L!ZB@'DLyY_Ng?E\r28],m~ p%![El^76}7=ճA,D| ZnJC2CHԎ' %##G B4Cf%EcMKb,CB@p*.qcEN Ch BIwL𞖐_ZOqL_>v ;nj<,cR{u9~p ##^YKYD{GX<d =hy4%xĜ(5#"F$L0 &q2y:AL׭yqg{180rAx& 2 DhaOD>D}4@Ʊ+}ަ-:Xqd!Daa~O#gn/\@høA8²ԁp11^(">R,:D4vLo6mb;Iv)9ҾM$G@>VJR^JAi1ibRAb2ؠե>W7coG?Wduf{])b2d!dۅ L0 ,_ /sw1*FʽO(i'`u<*U-CL=} "[jy/E<ؕ2]LhR=Yi0=>mW3QE !q{~ՙMWZaюڞ壞EN;ЏxOy gSֻ qA@|g n{Omݪ5^2@ٟ(r[G榣掄1MG9~m!5̖ݍHO:qH["{wgrRd^4O/zr\K_y{[W X5if痹dPI\!rrzBgKкMyK|2-9Һ=И+sýc>oH`:~l`}4ܳH~<,/a8VOUA:6qy=M"#{AKӽy`œNn5> х'YU+l_xnte[-y`m"PǁSTByrZ[ERA;{\:qϏ,BH|yBmug%!w͎py0B5qX#ZP`i4̵ҸMÍ [+A:<}(qq5}&Oumw67n\w#}V ՏvM˨0q":݂3ݏ?ןyiחC?GىY_l{u 5sugl:^П}xˍNed`JJ8bs_Zu]Cm4eL4W:ys@E$B EuR$x'9(*۩uPP) +j9m+vd=*: ".Adm2 
aU;U51:vL:Ƴ_*X4^KLR2Z5/\Z+qB ҬױcұY9g~е}۬ml+{8zwa܁ qlv#7Um&nK8 J"9*Oz7`g đwei Lt2q41$E9cpJL48*zFwd/ĩ\\'YmB9q21$ELA#ACA8. t 5zՑs&C l&D 0au:hW%-;Vyk+No=C[Y泡;*"jm\0P"Q n3W}(ˑXc8Xja&Gu6 \Pu#M E Zc@z ˱k%e<)~&Ȏ! Yy4ݍPAlq9I.qO/.e Si̕bD颶r-bdENĽLubu| *-U g;L*/$݁-)@S0V[:yD5;tL cK0ڔ 'T- -Esg§D3Vn=}uqѡؑkۖM^2ER"dVU;k69S,7U}VE&~_~W _Edv<5ݻ?Mp¿9Bgk>%$l.'qzQI£d)ҳj%Ί'WEknuMC* 4me1M44^{We."^!ĥ†nu^4@_W|Y'ttu1/P˫Ŵ,fy O?ue+\c7!gnUmX\Of+QGYo3MxFxye-`%wzS@$Uܕhf\(Zv쭷Gpt xuv}Jŝ3KOоꀩ-&B{\q\)/L#%9a%NH[ǽCH#8{e?!;m}Ɛ-dpr ABxc?^+^U+<qy1_.Pc8#3yqN/׫̆eI'%W>:Y[ʐ KE]u&.',)6M-~C2&h^Kaޘ|d.drV6PⷊC1fm>K!&byrfJWVå޼gHBH~ 0-rY,r;觬X&c署!)J%3Ћz*ʍ;dO| j>tAzMzr[79nl;[Xٻ,[T^u{.ou M_0_}{lPy o↏Ti%]+8$7PVׄ+}jBkʓYxm߈0;.CgW-VVڮƷ]MGmz6-va:V*Vߊ_Rs3#('JfcۻVڂ|5E4O#qo6ёm sUTTxVS/&L,)c8eȒ圕c&jus%%i6VQ[S=YqH.zr*DH>'[ x`s I͌"U}kJk\ؗ e\(.|V.3&xCٷY\쨼7'Trb6[(7m,TG(DYhu)amI894e5`2!&G&#\2m;f]28]b.,2cqqZ31ڭ}YZfmX{`̢4JN*3C!:e4L΂PVXl |X oz2cgl5>eD2#F|Pm)VMI.Q2Q F$`ER_+D`heeZeDg9cYhe2τ%Qs#Q_hI 0QvlxCƮȋeR|}yQ̋zŁ\E@VFNCQ R Ȉ.I!Gɦ‡}д̇f?>| k1?TCApCぢe}ƷwȰ8W\2Z*_-B2E&8UTQ[[Y*&)J =D~I@H׹[{Wt#TMH`C DmXT'sR6!Y(`]Nx#{@xqz kzcV/gK DT:9 VR=phd90úU!1%0ē`T5X8ڇB:Ja(GѲq!^i]ڰ=dI:tr)>Iް1oZGz-ǻ/o}`޶YJ-[-z0Ō{b($hAyϚBRFg%C ~PfCVJ(oؐQ*8+v* 0S!4$ȹr3G^ByŦC˗)ppnuGNfQCBCF9 6"Do+M9 h`(T_ع3m/KʬV)KQ cpƍ`0u{:PYùA!kH ǭNU|ǹ)0T_Xr!lΆ[K!aFq |\ncL*Z(?,ZriA(5j*Ee4(D#!-jvub9:]A+HyK^Sy^j8 ߱=Q>bxn$#:,KkUq);wfƹ_fT19yn&:OV }s鷫Hߍu1?:fǩ,5GUꧾq}OW VHIKw8p_ì@ʭ6 ̀|_SQ{7uVNJ<R0Ww q#ލ$y&_WhM3r|H'̮|y>[r]N$(QFRgt/Gx5WsYtBrv@5!B~NEcnx?%'Wh)@Ol?o'9NrNǖf2G.pgJLd*) n/"_ y<VGk`O',[gg F'-\k.-pq^,L4Z<rZ>[EO , s*Fvx}U͞#)5^ǝ7;_/hho2ѯ23hen7뮉ߕ\4d+n&i%s(JX++,TQ?,P63Ů ~AO'xrAH ^jRV;gKYS>8,sLh[UP_G(ωj@AK2G|#B0zaǕ,ԥ{nl҃ f82@% Kh-tsw՞'A9#7a23eo#4QvRw=]U%Iy3~'?uWAϟ&oO̙zv?Op _O/~OC+]VJX sY&=erD嬞nYoVz^5:ԫ ?#M8uk#'So]m,q*dnڔE]L/h;\zR yDїy\O% :-jこ@܇uno~O7x }KG=;SVn_ԓ]3n74-X|s J~nIǫ[I/*V[ez ˔K%u)7EQ95oCsNdб 1 +ETє(iF\>ѡ}i@eʝp9(7VDE]*:歍(`IԐGNMYYQZ]7Z/y<7tvLugsV 2_ӏ_./Md:0k3bd"/.)Jd脿CAO܃`К햨lNgǹvs%xzWBXb9mSQKBE [<.Y?O'y:[?~Iٮ,XVq/{所 Y$Be͓*5OBiykNet 
kazCWWUu":FrZeY̏S|]яiݰ]NE5*fcx26҅?:{gezKb/R4Dӫ)]x<&'#37¿L4vh|#n`.)BxD9]ףNj+oyTۓ"o?+1,*,U(«ucv[Y4WX% x+S(e/) U|٧6N#BA&,A{|}q('S0N[a}kw ٻ̷R 6waM1wCe鄉w0`bk'~7" ڂMXӴe |:S?0ףdޜm^b,Q L#H27Wn/@074Mp5 MZ]iD)hiZہL@oJ ]Zp]+B@WGHW5Gtg ]Zy#P O izCWzCWV41xt8zDWJQ}Qju"v024NW+m_Zu"tute{DWXp\"y"k3@WCWVsmj;zCWWwEh] QZ6hWHWv=r\P}+B:@(jqJliK[n5L(a7ct%ڷ9jx 81@;QN(;[dhӴ"zCd_h:MJ71Ҵ0\]`k{CW׹tet"t|{CWיUF0Ԍ tutƾN o>SBWv=^GPr;ҕR+#B$RV]+BicJ[RZ+,͊(P~teS ?tEpmo jyPA:FN٧@c "\{c ZPv-`u>N ۶GWgzH_!yЮ[_YbyHvAPkadQh8KʔvKܳfX"luuz9tOWN{'AWhQr<^5`G11߯~>k#io\^zu r M{yA;#kBg~ӣ1݋?~Ͽ9~׈zu_k4:_okm~G5p5=>+_z}? >9Zlqp~Otzn'ۨ]L9jߚU~P0RgzxǽxvTvxT0/@$w(薉yxBx^7g|d0?} ~!ߠ91?o{^[0|׭{WwBl?+gCqĚ5.}-;LѾ$V 1k_ub {|7~{ 5w}?Arqen z}~/Rfd]JzQ؄CA5-DtVΒv(ͥ輹7\ MɅtVTF\UJWr)fͰ8iA5s? cTgkg,|g?/4ta}mB! T˜S ]6yCюzhs&tDK)je"9цsPU,QFɖV1IumЌv4GIhS@7o^BRMVa8 pJ@uM+5RœDhY{n b1;Lf l׈fhO[(:TӨ%gC D4\E{}Du hQJAn"/[Dieѥ FaI|v 4fèiLU7*)> "_^WTg|S^]*6YtT:DyKP[рOBr>}9oNB<A޴\s9 Sf0j$7.(]cp_^wF8'֤@{ur38:Akc0mƀڄҙܜ`/DmT/ *Z}ƥ/-@M|^YbHuEQCBNFG6Tق1K(#mtAڳZaB BQ:K+z o'0L'mTȗBф i'S B"Cuԡti~mmVJʆY BRқC:_|ćE b:±e6P6Zb"$l ]K ݑixu dFCVj;xXUA1frIiޘȆRojPѦ"jх6[ r  (ljS s'FWkVَp5TM`ܝ&EȤfC2#I6>HiN +QY[4 *ӫl5cd$IFhU%ٕӘ;F͐jPob!8w(c)(duP BHP%D&T+Di?hx"W X)[K1 : 筟u0wTL@TCJB*)ԙH͇*QMu87 d&/sAGM-V│D]`#)#MEUP4kϒ (Ez@?Piu `UW $u9kh 13obKRtDRH NJDA e͜`3d5>! 
Db$ۃB6rZQ8b!z_4yZȠΌGa/wnz .USW%$' cEUJ=Ӊ\\>2`zfۯ׼WQ ?ԂU0o3?8;5͈ Ż5 b3x:p4@.AϛБW%sJٷUe ìcJXżs mG5j%eD!v"}IƏǧ~]̻z$ӂ{bBW6c+F ҕslX]1CWbuGePBW'HW%+b>6p,tb^* -G]থ QFY:E^i1tpbR;P:NE-?nZR{Vzb~7tE_8='[o|֧/LP#+=HCu"s7!:o 5Eg?oa2rgΞ)WWכІ޼ܽGu\gƿH Fw?8SH i^-.6x4(- M M`_]p$bq3MGOW@yЕ Y b RW;QW6c+F ܒ{CW Rut(S+Q b6-.ťGQ(tut5~I o|BW@KAFIA*ĠӒ6pqz1tp(6}1(l8EIDW1iS 2ܴbh7NW@eNR"͂ ZFhѯ]1J+iҕn)bV}u/[ o/ tV; ?6T&NFsϷVKij(E"-[]Or,(+P/v.@Pmk biUaiKuSXrRw.Qږ/9F6ɨDTCWt(UKWHWhd kCWWպt(7nrJDAIc誄ۘwK +EM+|:)th;]!J-Z@T,L"ܓ,B/>l0rәGW؞ !Xtpyc+D+y Q*ҕ4ɺBg_/_ juh;]!JKZfJ m`jWy{jhbWPUR-]=)q]c'j|ozlzsmhEC3ZNӈ4}4Ate)th;]!J)[@ܐ$pT|3)th)Su+D#ЕT R7D5t(v|KW'+)2AtϿ\.BWV>v(jJ6i /~ ;=ZINWR.$5DJ5VM+Dkk JhKWHWFr%H k {jhϵc5u;Е&9X5ɻ>QֺHJmĮjsELuee >lg5?R,#9imGwTY1lsLg#g>Ka{VO7]jp'0grT5[++غuQq:SVR!iFI+ݻB! FV59I2SwFu;4Sp BJ6M+DkT QZx5 F9N0=;rs6ˡ+:iRs "-XtpYcbvV-]&Tf9tpic+De QNBWJ3ny B6=]!JƮ.V7D9tpM+D{P*[@22%DW3B6u+@)h ^"]YCp@K%mN "JA[fK 7j\q=Ugr+Y@Wv=o)ޡ1]KIlo6z%mM`um)4h5#Z eݶaji$4͌%58`Ec B*f-]] ]qODWXB6-Wu+D)LKWHW[BDWX5q\\VCIEKWHWnoZB9+XS ǮiJYjp@K5g_] M+DkkK+CAte%9tpm ]!ZQ{gQ*ҕ% B9 •+Dj "J:IWV}hjs@KP֘C0́͗lgT5uG7խmQ\Hb ?.ezPSwތc)Nna%EΎJv$' *&X%PA14x3\vڻWK_,(mń{ `*!;'d1H{B@x>jR̆ MdA^qp=}L,1aR܊.)@)_fWI'. ZSeD.44 3xeR118,ђ(lpP(_{܆llJsقK`+ั7"/u(|;\7/qu9?D2 ) otVKn7;PJU{[}립}oOln`YnYtp%`ERgg*#&(S$ TeSVk'=s07d.ax3 (^C`+h<F aZ5X!5R (BR\hأX pU(iVx8luKYYbRԘ3Z34ђj+ htb 9SBBފYXjh:Ht3IK%۠((ZT{NG piP* *qz0r+yuF9HQ6R|G";³^H)E$Aƒ> @ j$ {UdMt5IC;:b8w_gf)Lӫ, s1 /U^N&"TEug9Beb1Fq2/>5 S%W7 ̯'v5 J,4Wˋb_<|aW du~!Op9iuV+K|3F*K7a\uf:7#硄>;eT igvw;ޅlB>4v绫|2^g@z~k>āvaߧQ ?0;3Nɮ3C(ԀCyrº}@]7k?0]w{" `.KX@IJغ!(,FR(zxˉ/ĕEDُ߯~۔~kB<Ph)XzYmTL.Pe,|8 _B !TzWvX?n(@yb,@y2:a㺷ެ>A_l6{Q1w~ϝ+`g>@5i|3/[ll// =jMF۶W2nB]ᦣ"UE2O&E)s) K`(S:2`p5N Ԟu􂖁`$oS˺T.eKM}LβlhV{uXeYqArA8EI<q5h[Οߪdb!>|cZz2L˚<8 l 7/4sr2v?v#8%UeQO0TW^o+Ck|iɀvk#pF8!OSw3a|Lj<)]afC1hσbXʪ~ dޜLw GV9 fi6=~瓄Vټ6le*" [eMGҤJw\ `w*3H.)`)7n}.#yu / ~zvF]4LۙbPyIΊ8M}C'FIOг ]]-YqcLR)2%B*)a(d%"$j0Q#).pR$YJYPEOZFakrMxfG6q! 
̗i*} $s2>N1j_4"lm 5{M|ޢhBXu~?lVL쏃lF4{vs&Ӈ+±M?:$Oc ѽFJ+tSڜu>WQ9H-šm`\AQE6Jwcb.n֤eNj QG!{EL9S)2Jrk95ŪNaoX(׻B,؇UX0,SgCWiԙs8P~1gF~݇Q}]_GF㏣=ľkNe0 b_؆G7oX=ûc;/ci񳋳r{`bs ᲵHo]lҬ;|oM3zbcz.\y`׃fVΏaY˒P!Bx 2+[Ό ^fd^%b/GkV$k$W4 zڰ`}`& NK1i)HT.Zy` G7j!HScr`ojS2,95@s# wLǟ Oϟ7n vY;,QfW mwb9xf9:柏e$>Bt!rsf9UE o.jpTfo*YJą3'AVIJ wT|48kN;xZdBBc"glԚ3  gJ:S)R e' Τ8M/^;MY+utRऒFJh+H2UH﹣49c=pMbc _S!8i.(y[/Tc)MG.tFV!'偙Vc2vr/>s+yů|Q.L(!>zf)h6&+hH+p[A#E#A1 C[GƱef撀W. ²Pw= oM]p}G鈡û4?ۏ_ Hu^ME!ޭ{05JP%jntCҽ5aW(iXWUOxrmF2bj8o7ꎦJ?"M1G~Ӿ2>?t:ۀIM.7\sfE6Zt}_љRrfQKqr)ǓGԚ'tǨ)bŷy>Uֻ"ϋyE1~uwSWfo %Ⱦ]ٛZ+ֶ ʰbMvޏy\d/{BdE'WF,Fj6t/tx?(Θ9,w6#J)??48`8^7?-S]R8L= iM͕lDQzwXlQ2kjlMnj Y$?WVDԖ=`aGeX&7yBhYUdSz[^2Uc~ퟞ&5ǕvuXIRr ).q^Ԓ)QݨyV/ٷY8wa7_GgB}h/dZE5!{hEo.,xN^wW+=޻I TZmwӮwvw~} 4FMV>5;j|h;|59{9ˮِZ7[wvqt=E͉Q/G-U|n"</&W`9-!Ia0VhL~Ĉ!Ñr8pP{U"*DF(%s#,ESz'!O<Đ\ ˲nH@Fq4RSJ1.X'Fe%#1( [PTeў>]!_6֋m;fm'i\Kj}dt]~! {;1:U[I caM"g8 s"[Rqol@@n ܩJ%(I %*"@1D |['xfNy :>Pqc4H;J0FKg7 U#dɒ=4s.*k7uWԈuk( M#IhHS&D* $AJL$UTCѹBTI)H# _MpgGѠĹٲV|vA[%A+3}2̔(jJژBbVi\g3xQp(1R.=KNvLA/xZ.u^X#yjxJut9%IxaF'd.Iʝ_R!@[[#3%"Z&/S3ARTAKDtBVtH[ `M<C l!Dᗳq &Q'H1q;A'IY,]BRU9FΈ.XNE8"ќ98$g0Y1YzcZhrācIA,8wq8؄R$ (P. 2"/g٩;/x(9P ge=fmq sq{z;sa}޺{$Gwg˦h@e8Jȍ_W{9'w/T8uWR.Ksvb+Dor@對Ȩ9e)o\G5}O|W%gOAxKP휄D_0 +%g@ H'(Èq'LTG~aq8JrG!L -akmIg#y1]"譸\<ћ?!wW۾Fau2|v׆3n;B{ p_lBH*/'xQfJN@xJ'+TPD.HAMJz΅-O$KyR: :qx\2"TړD9ÉJhStb6FT$9K%Չ($<+QL<^*1u$\fF2К&*y (`(Bd)4~_O`yw#dxgF?^(lyv1E'gQ[i.ܯFm~'?ɋSM_pYXW//N/O*g2 R!gAWijCm}n|KF?#qE0j%@ŜĮ)q,v!&@Ƌ&Qqt roٛZ1+)j! 
[r@; S|o em;ϊ@&c{5;vm{%(V5o>ٖVww ~1 y<5J+*ǜ:g#9ksX~`xVϊwThaVJPDa\#$hZi,F4-B<#S]C8BBo"qg#ќ D4zF Ȥڤ&1y#Xu2d|AD#r oDʚ y '29\ɁIq/Deh[34gRc%V2T{*E \`5<=k~nJtv8:*27&z݆wb?j{Wg 2;Hc+]d 9ԗ{YL «kYQ0B 1Q.d3 oSYT塂)LUYCdա8lCmE>u]HeҴhh_i4يGk%DL)e@[<`AE<$:z&iywnx:1Q#ޅ{Lv&qLN9WE퉕plK$Dmc%!JaXI'+IPAo]!`B5n<]!Jc;:AHզHXQB5 *tBvtutFcDU-thi:]JJ;:EҜ2[DWXմ-th-o:]JN8ߧHW +۴ve$=tpYk+D)S+ m2i=yU]ZE Qd䜔MW6vQjG^@Wl]I=]vt+ZDWX3B\m Q*)ҕ$7}ŷު+eۊ[l;+at{e^y)% _7߾GY9ƙ?# J RD_sIQKfB1n[DX4խQ֘4HO0Ut⭡+M[ iGt7vF)ҕDj".m+De Q ҕ"B-2m+@ˉm:]!J;:ARR*-ZDWX;+E xBb8Eco]!`KZCWB;w(jd֮5tpyk JtBututeh5tpUki 䤣+L%l?(H콉F? 8:y;PQ،%zazg}92ݻh2*b R %olr'*d.MѲWaLSmo_>/&`-sP(we4ɧa@4/z 5ÿF^٨% %Vj)` s2Y&ܧjjsP -jH/DB]Vڮ{:kcO92ckZp2'aE""[x-ăꮕ;ersΪº <zyjX64ޏ2.Yr~̀f |)Uz&aZ2!Iݙ3!Qxv>@Q埣[焮Rq3/^Wz噻:yKa#u,:x`-V+v鱷wˎwGHYt ڭV<驤48xKgk-J QE, )aiI[X tFŎOfŲ/]!c˻#E7 +EtAh ]\Fy[ C{oOWҰNR;5DPR6JNҴm+@ko:]!JF;:ARF &uU{ ֨+Dko JI::ErKZDWXHBG>kGRJ i@聝SݞAke[ 2CNWxAgGWCW*ĶCV5DzmXcwUqj7Ȏ =3n(@W6=T`g~0y)1F37fr&foNZCW4դ4( hin]!U-thj7S+AEtu{`kZc#c}| ]]Ib-+,y{ B6$SR9s0zW0'W)nxnŝ_ ftqht7,] 5o\pK'4JB%[,s {Jwƣ /9lV4l w z}?+ o~w[5ZEE >˥xVuň8Q,,)\L)̨Koa>Un"tI^еZ>zq]8{3#eW}gk)qvA-DoeoEœ *㮾={òo~!U۱\xoR{dSZ9/Zˇٖg !nPl3^9rp!W|5JS2lt{qkp#8OOe+~wat9=۷ 0Ҧ,cHRzMr,b2&mO C& |ӿ\n0UrԂDP._(hNfoyC )Iᦛox4z`kYBixv0xZoXծ^J3^fǛmZ۹§o}s>ocbYmvuW4)BD#D匣΍j^x*((0/3Ef>̥\,fn$`Q)9)\9&ե͎H FLJ`5ɪ R謏 } F9ᔕx^S)`vϟ32Jy/& gy,ri'/+a⽿0'G_3wjz~#gER<+]=UkrB65 i<]&sB p'.$s8.E>|G) MT`ƃWZA[0UoWˉb>L*sN:tYjoyf{F`=2Ѯ-V_E&v\|{Оn0[/T8˥t _F׽4ev"?t|3t ^a.,.{?BC@FWxؑF4dMjnw`AOUOG Rԃvy1CPyOiڻIJXrAj>2ޕ1odRПЫ "k׫kbn@!"9^x\(a~KVNĮ[k~j9/K]9  I^zac3 QsCkNB\R  1 24b4a6j͙Brc $=՞Jb,;A]`5+<Ԁ# &I5˟5\1HA, ݧ`{Ұ7t8~軛y4TrssOðZ,?gktezağ)?]Q -XYSb.~+%ٲCUجXL+kuN:ڈWW:m^I]c[Gp=?:ԺaJr'u0E1۲py4A%3N292J&Q)ML Eʜ~*'&-Y-Y.Ǡ7@.=N|sݿs3r[|m}mҽ\^~ra8a623R ӆE2ؘXB"W@<[8#/ϻqG 1Rr(k::}^Rm¥ITᜲEȱsmo1<ؤwP顊 )fE-/2}%h-ך40E!1` Ly(O` IqO=3Ob'>6RO<_(ܒx"9pNRYHSr6ZX"\u Tg#!eR<(]`^GA~CG,$m ,ciI2Z F?DF]1v'v&X`=QH-;J G;e_UGpu19dzۉaMÈN&/99 _(wEpq^RfZ] ve4,A+:墽+6^g9EƥRRea"GY oSd1_Rn%2]LֻW6 gpU3Lg*پ-6=_?8,j ^r]W]]?u4Ѧy\ou 
),}):9[>'jZ>#Jv߳M{t6mWb*p[r:K=i#ڨEy#imQNBl>æ{5=^(`X hn8?G\s9~@+<+ʲ N>߽|=q͜hY\x8~[ >,0xz𻷂rj+{p"x>_L3p&\l87ߟ?D:3AOѺRdno7bEɸɀ J]rcqN@etEFé5|OFKX5Le4j/?m$.DcC֣tMw7IUw#GyM_ l(ћꎞޭ>Ʋzs5߻v1ά]g犀8jh@cNVx׊/ >Кa| p鎍.gK+;&U$7)G]^GXGP,fq0pV/ܮt_B/ ox) 4>>]u y|Q=8!nn8NO[ͮgBN͚jb3;2M] 7f#І=-!ǵջweَ=o9ޕv p5~jQqvIBR\t jWf嘚ᚰ;f]8k-ru[Nԅl[Bh~YlY5<$z]P`|4u *RRB$xi h" *Ũ >^2CחZ(=@7x ({X92\w PM;0Qz̀F8URP.l~ezŌ^9Ч #_m1޸%loީCi?eYA\x(&w;$1#Fbфp1䫋8~ 1&Rq]%BHU6!t,v/#-|Xwlg߱l׷!FgֆlHɐVf~hteD=JK3h ZmSs@‹k }7ŏ0"8A姜=Gk?Zvdn\|ގ W$Ogs'P69Kp8oՋ-8jXd;CDPgvMLymcW47yy[^vV/ZZ7t:_׆ml~eVqwAuGWa|1{'|ANh1>zβk/E^ דt03|B</83gcx+j7Iʹ*MNfL*#Jj)IrzCdRTQ],I)H# OMpgL.2mi QXVg_ӑ(jJvG/dhF;cqeGrYBu*;$&|CS*F>vk$Q>8OMO5˜:F'd$rUz{;%9 p7JpNe\#-ʚW(XBE edl}S7=pY!V~y%Կx\q٦€#L* t*ZU"*O0`1̵D€miX&:9O8"@ J'IAQ!y1}h\hcMO ohhX@FTj N%IQE0 ^"t"QBD!x!44rCB/L&% H1r* օ$rt gKwT9M]Q"ќ98$gI/,1>dl+qX$Bb<9.#&"V@rIM`&&9նB\MnM$zrb r`9geٖͯ$@겭->0D9!5X\;T8uWR^S".*D1r@"Qs"qBGH«MWV~-8| [j$D&(݁/)e1Np'&gQ)'%|- r}{K˥[_vS/~i|e,ZzFأ'BGR)/1Qd\ ~j*?p!;4SJɶP;D>u})J[[<]o i?|\ɩgy{f .g}(۞k")B$L˔!Y &Cr1H } %~M9ogvzWgGӀ~}waȅ\hBPn5\[mYr^BN*D˷}nm䩕X s:%MV= h| rt ;.$$MDr˖XW]JN\4bZ(,Inx(r9qU֡"仠}&|4ʤ95oÒ>rJ  )1D/oQMVj'EDdȾ0z-&mg^D":RN) :ZI(v:e(R9 Mj f)K(2rFٕ&, fZ[•,I#A+9ae9{Y-W}bMY¨yLKh N(bWbCC"_=[E~bZk`DYMO#$A%TC$MiCbPGIϤtaT:L(tNÜLFD{罍 dsi8(S'NZx7L+MMr1y2A 3^)y ړD9ÉJI[DrEabT@b6FT$9K%Չ($Nam1r~J{ 0HZD%Ӗ9 %Q\J8.]\AqWrN#e~EޛPZems%F =I[.ZH)8^8I0* =<eߠO9lֆ'CSz-%Z^xS/]*[іU((7$yp($剾FPd Ɨ9X=@.u9qz;t i 4Y8CNÆf@>x0q7%_hdvɘ\%XeԷ?Mћ̛ԋ^=:s8w}q}r]_wʥڝHWD#oH.{I7޴5wo~Og.#?˓??]ٟL;䯧iJ ?>MZ+vӤT+EL3[#N 5sq.m7'KmM a83"cȾAGB[Aev- bb 1vT?5~'ݕ Z\"~g~=Y`&?IzEmݳ66%Q&Y tkC/oA\_)HQY N@ !:$ketVPSr07$v($5%{Ώ ΊŇ@Ű;]1BgY s*A'9a3T}S}\$8@Ez>:eۂr$+̃8Mwj|.O{5N:RQ+O6"Do+M9*U\40l Jp*Ɨ:~ͬMxӔ-7UShq#+ DS ڽb,-EdTdA(*i-mI%//xo55=ȐLt:ŸFR\\FF\:rB!kɹT 54;ow 5z 4JQY;p<ᓑ!FPJAbTe>ԃwϵEd)qDڪëx(C댢\ɌjǥN'ΚX-c_mt_EΓ샞 G@p[V~ /ڥ:KF?ˏE#7ljWe.C“Sj"ҽUhShq[ĵR0ŗO ףqj:ˆVa(r&*^4#vWPŁ9D31 v{goʎ̉hQ7xqotn!DxIF~\Uy &9ΗplؽTbm U!fD8=Z#1.8bGFKzg|_fs.ݵOYQ GF2e*)R;r?vP"ORpa9 
T~Rc+L8[''_SxF~MzӛJ@qMv-*dJ5ZęHzާ=EņSU/*,\#ŨӘbu5͖hxqcAڲwUe}'/KuyEa-UvWw%58~$cdG J:mN8ssp* Vr۲"8r ۷f/ xy]t='NQh19 &G KEΫCy@nJ'ۄZi;e>ϢK<~c:"cQӵVYfT;A {&z%|m^OnQbRڤm'4Ēt֦T(&&enفÖ{FQt>'V Ȱ+u4^ۻ,[ m%=`C{6OBԇ2Kg>8..Juk?\}lH Fߪ !†7{JKsK:1>g:u8֫X}#Toۤv|o*Q2Z[L,K2t)\ARX+5^*%u|bY]_ڲ.}ǩA+ZU)J'h/UoZ a~w=7:9\~sHMk*ކ~ ch]=Ӭέ"Nop*`&M1z}0pᲥJZUuкBTOn|̩9A:ljVQGSJd:{5;TOFjPY!S1.4ɉƀxƴ/(/\#Z‰-pIۓ|%I {V-,,sO:{Nz8+'wo\2"AT; Spln_O^!A{47j * U' p(ݓS F\CR>iQ~)Y +RzkS)/) >_L_//,Vʮ? %vZۜj-j[STHm{ƆYuԗjC}P.Hnzܝ;MA@Nk~n<6Fy>g{ )og&*F6%߷\lK)zhV&x@KFـQ:r//ٶiYe[/N$@~N֤ 'iNgr~,ӪԣIʟעxsR7A% +){F5 xRΪc[ߦ`>Y؅G)OLj&PƇDLRPf8C?}͕HN5"uL ~B`\8k"p ˡGQ#αSIHp7ADj\i!5,̼yYy2DM2U X.J]/ E˳75!1z+yXLAеtNF;uFk.4˺ F@B^`HJƬC6YbV@Y_{dZ2ɷV&I , R G?Y[z[ ҀRS̨l`WP&4$ kFP1G!HTP&׀T߄LJSJt"T]!ΦFRȠUJ)P;.j!ՠPwVrEdܠQ  @rLc\ XYu0wD `L `iƗ)B`@YLu>J@Z*ت3\[|W TA]- uAB!00 ƛA6Fq:Ҫd :!JrIJ2Ba1%OpHv9ok=/3*XqAyjN$\Pd"U̫ fY0fcZi(^Bdᡔ5K q[O κ$!H;V@ !K^Bk֐h юmBܷtƢZ<Т td!.h]`=mU'Ad(J+1A2~ȃR!*8vGyCogUE תE¨2,,{njpM"β@ A16%VmZ4+itԞEw64IxTF+ j@eV o[6RS饥WoU4^"7orAV}M>K,a* z1dkR@f㠝zy|<5|1o+uT$K'U P]C*ih$ti#ll% f=#<;-,F ݚw)DP''ޮ Ę#@9Hͣa#[*3bT2j7LJȀ-Cۚb樇 tyB1C[uŹîQ$t  J`Q {-HOOdPJwU߸YaKl+TOko{`E^QH"NdҠNn )E bX0j`R7JQYTPcD&7cQGQ1İ 3+~x$mDVc-S{N낙٬ E5jփ*Mg($]u21KCR 9Ok{iW&E4CS3AnQ%#pゾzul) ʽKC$`PrAvƒf5$Z!T$fBN Zf !Kk|;U ]шw9ЏL0H5ozF~/n^,}Cڤv7iRWcFao5Z7뿯J.T;+JWqZ#Q܁uw ְ7+ϻNJ~E/gMVzǛvpU uJN v9a69N'P^v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; o5!'G;+'si/'BsN B,; pA@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; tN N :'1'FN e:F'ub'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v8I*{;P}]sN 5qw7:F'tb'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v'{͜Rm}s.;YY_aq*3%h%v2%H(i޸DBOqK }pV), ,VʃYJPTh'DWlbɨ+BHB,gCWfˡ7KNpLW]֩סPSWf2LWziijCaqvoCwٳ7kKòM ϗCH.gB,?rvhEo^`pnvRvJD8LSG1핼[hVikThz{4M(#1ҴA ?!"NnS+/+Bi ҕ^18w2tR<]J?xtE'DW7NfhNWD#+\fBtQuj*tEh}8t"Q0]!] 
+b2xtE(51ҕZ)$6N PFtutP)k'CWWMF]ZstE( l㡫b`&a-Ӽ!/䯝#wz8f -T|{w\oyYv5W{O /܂;<<_,՜K+vx{_*;4滏}c'(۫ >l%t밽[m\hy_x%&wܙL]dl+:ZW%:1t鮯./'jfhwmB .lL_W^{Ϳr5,{i1St]Lע{GeGvӬ9VoeɹUSѢv[ӕ6E({ ryA~F]L6G :cT-}§=b;(tV VB{%6\{%KlS!V 1sѮ~+!5و 9{ʻ[: ?޶;0CvN.+^DSQ^vtiJVK-Sd7kEn 7=_сX __OApzBNsRAeoh袌1sIߞm/f' Ys=8J/O ݹ]*!ە^\\=nj m,׺y/y܅Fɭ21(IONJަ˯4?-lmERuczy1fze翤Q5bqݕtZfLϝ='_|',nWPn]O6xFmZCz?h_Q><2BLޔW2]:ݬx}p$slcM/Gw7eV|+Y̸П'sc2{r{(쯋󶜡oϯ)^H/_9lUoUOڱugzuJ:wT2,SmrK]`$~B*oYHmTtpSj o:&tP/@ϕ+2!C.؇0j!3pZt]|=Hme[jjPUpwy$FrL@R=gyv((gHQ(?>7: rn'|!UJ*'13?ֿ 9;v j7N 塝?hwey ԳTk73O5-GSo:=!V&C뻡i(LGHZ?\aT`CWk ]Ze`:FBFMp+LNWrͭtu#~>싌ƻyˢ>dj*i_+W^ K/l*wCG]ko#7+`?)o  ;`I7>ږ%E%=*Ilɖd٪jWq:U<<hD*Fߠ6Lh"Ft+-4*t(MGoQV&-tƫʌRꎮ ]qB6tMU[g*T]EUv;] )eT-t( ]IQ95 kVUFٴl^jhS=65tp5F]etQnjGWo4PND kBFfdPFNW"]OWhOY+U[*t(MztW4mRLk694k3GҼ Z;To@vhGU2*EU˼ܣUjcٮ: 9P5>6CjcszVo?Jnf!ς/\/](Ϡ?_}{}ŧ:r3 ?'ԐڹpAcϦd,YNA~&K+ie~fU'77SȶE̐?~ݭ|? 8!зVQpq|Md;}̈ =Q宺^5-f,mCUPlsǺ?`E^u8…+e!X.(ecH8 :P1;kuy}=!~V>)Q),rHLh*k1$0 UYmcLD"Q `H`6pQzϿj f hCe%رVg~r}my֛{˛xW~1N77 :.h3/K/_+-4v&w#=SO}k`OlhoY~Ycm=4)0 +Byn5ᜰXC "FlDADîaxrn/B4L$&Q`JaTqvoy҆ oP"ņ%(.-mA~X+-k[gz':FZVR 70j X91@{h:g/jH Q$%09I*4&q)]C:oCz_&veO֡3VKkkpw$`YŜ:.xKm-/P4ʫǛw^2IJWx?_]zu{,sh4)oze}İJC6w++vyë_NCP8 [|_@-ASI}z6XPSh|2WL} ^LSŦ6_f(*&^/XũVU%ޫ()Jt7q:TuXb*^<"wlܞc(O9>?bVqގb(9glQ[*{CHu^qogCx__ '/ֿGw6zZ2}!2Kv`yP1$о-xƟ;~byn$Z̑q`=!HD`!G)Y"NQ|O-zll3[uU=iM3Ÿ̖EKΥW/_ܧhUP2Δ/nfq+fac|e `~AQ\4'En6'Zpeg6畃E F{gC 8YT_lS65V ГoGE۶%Eo\ #ĦY }bQǀ?P72Zu$ ( y V¦Zϗ&zA@p&__A>PX㮯lo#,<퉁fE;0tVC3 yد&좻@Ye\̞iX.=9Ml9?`ׇm 똿^q&w9eήFx>; OSBuѱP}F~*Sqe :C}/= a=36p*)cx OFfD˝ Ӿ}jPTB<%Zlxh/*Nz .wOȃxmߝS2N>N&{!CEhCvx-O<)@ݢhuh~sDd|:\v/6X~m*Ogd=|'.$$h.–Z1`׽ɪshte&5 IKLj]xxay}Vy>T5>ﵣznq\"tDO#Z"ИH<H&a'6ɀ =sޜggmހHN;kd4 62nu4[mMr\ҙUO >q#{]:>r5ǙF}ʁ1xF`3)i酜86kᕓn'svRI59߹Cxk뺒~"^obST$ĉm0`ڣB^Ѓbkm!j< T){ƻTo_Ic6`o^Dأ#JRim\ȿR7aR yXJs/pTg/F Z) ER DՍucM@L2+KUR AD)/AJQZ!D إʟ%UbD-r?Fym%yp]xhֵ|(>:y!i/7M@-B=_GUڛ|iC}g-6DsQqygLD^ q"xؖj!U`d3x#򒹖H5Ս~*RCTҪnӆ%*YET"]d*MƗ0Vה||:2N 
7ڐ#1M_g3sW15ǣ·swǗHet:ጟΗI7bՇnq++=shjR^Ԯ[׀,έtn<Ňd;LԮlN!_᪹B{gνs[]-=_rѹaEE-%Q1x1J4Zx9cNys8%1p?\H.Ĉ9ߌa=[7muiݴWjegj&7u(7O$&133QH%"IUz7CUrjk{p"h6T_'CeVkb5~opO ~;%;hS:CY)no g՛t>ɬx&\41rEso>oPd;D-sUyӼ%*&9Exy5֒id2]7muC"jw#FyC]7J ZSFj,#oֳҎ(o&%'0ŪA7ڭ:U&b#Qu[CKthI  vF[f2b*{^ CQm݋"΍AN8QX$$GDN,kj7@hUq @<1 (+AT#gīF.ɴWcf>(V §\ПX\t,]͠61GLxU*WpUw9^U caM#g8 9J 5 87cw OB zg/ܩJ%(I %KGR B&iIa.7pڭ'xfNy :>Pqc4Hv:4kdqa ^_̧6«x\@ɔ =44]@ru ׻ӋK4Ի{5PFh,5F@c$&2$)wJ0STQ]6ˉ̃I)Ę".4>5MF#ٲV#?-M J1rv,u9?EI@p"PO)d.fF=c8$HrD~k_ߏ~uh+;78gs6Ѭq9KԨ/|P^YoֶeҽXʁ«?Sդө .|)S=j@&MøAerD!'舶V5g@{ғ}sTj羴|O# 3FB2D bU:L {ʧ3̧ vNBd"J{Y@rOQ42Lٚ:ПXb#A5Z AhY*K􃕎jW#gT;O9%ͱ>PN7 ~>ݻ MQ~TItE]M93#:: C8--Hy?JM朻ZUBs1 ȵGddDG]ң;?ڧ+{9unh[qd6H>jyV 3^>3׿ qToMn ~Cdݣo͏L 8mwv 7[\hXF..&K,\jhoi1őZ:!EP5\@-ljEpRi':?1.}o'v/Z0 qD+g>BP"I(DX`JZny X }P P)sYB,cEa)N '@iNP`)r˲ kyY/FY4E |n+ 1z CC%YczrZ#\\I@PksmIv>`P򳋣t~'ҡ`B ƀ0pD`2 ;mLX~qL$Ӝ3NUˆb{oAO.jZij)c-Y$1p³%Hڄ bVHtґyDb4#7Wj|0-v`jd ϨCQ OrZI#9;)5 BIv͑=ʱ~|u(CQ;ėObpqڵV?!gHE@1䙂Mқ*L]mwfksYǓ@l:wm_x|/C^ފ!gOF&'WoQHdl4rۯz.bOF%_i|g ep,_rѣmԷYFx1hY^;@2J{D"4R=ġ%Ƕ:r2nB=':lͺ7ۯ|Q3^χjqjNwfh ^zUxޕTK(*{W%.[~TkԬKR&('U=d!\/iNri^ d VA.HAPS [8ASn/Z2)OLpLРsoW E(#h|rIY͜D-Ep#Ĩdbg .xpR!"e`u< Ɂ`S%ga+9;&]l%ƽNk$iɀ|(D)Rka[f vf}2}&d@H&t٩pV}?or Z!*mft *xaxi~_Op֨|t~u>mGjgF$Zi0iާC-:DXGۿh{4Um ŨqlbS=C64Ɇx`aWotvǘN;>0kO % BkAmZH2Y1\P_i)BZ{ "4//c6p[0#Z٭dV7]yٿ74v9ýO~&WẤ |`oMQޠi7 0r߬yk}=,Wn5yK+̿.m-Ѷn~%IR`s|1L/RLp(9t&]\PBJA]- 6! u޵=jCk{3$<wEt2-V{-OAD/$xt=d_#T2X;)]): !XʼnO()ZJJ#2u:'$^T6. -#/Q&Q8drh#r2"+9"gxISai;!j>3jM޻>eqa9pfϤH^@SPs ( c%BkF dTF?{G",/Yh`$g!H<$"V,K${ SؚQLOcURhГpA@JC % ʐZ5^XZRSq29H[?2 ']Y8vƅOfstxƻ7e=es*qr|y\NP}xSr `Z֯-ƬQH+w$`[(H{cGg,BMc;#A?-k3ĞL׉NZ9jOfR{}L j?"Ã5PM4[KMRk$ |rAXm괇W5v>$lI#d -izeM&; ٩㨌b*TWދ6:.K`M橩cW"D䅈 dIM{\OwYfJa#{-FvAhWOJ Icʄ64-HL*)KJHRN7${YrnҜuNf%rMEpqk ԣ( qНF5r YE;l\,\|.΅YǮ<'@؄˻9~iܲqO9('_᎝r]>֕PsDyU?.&XiqUf =DٻD~jMfùDީƞh'NZ_:F>THR䥶)rl#j#R%咩F aQB GՆVuը!s!Y"AJ1. 
r {֓yBȋwsYt|t.椴}pg oZ!Sd ?{CLcs,9 5NK&HzNl=5tbKOJ+$e%#43L܉'N|u$o9ud{X)!^=zo7l;}cCar>,nYb=Ů.ا;#m۽Ȕ+ R`i= JGJ֌aT(Ri?JGOeiΟ,4Bjo3=8߇jEQv\jJ簴 8[P0pLjܛjaJ 9JŽ?OyC~,ܗ%OH0J s*y(Zkyw ˩x*93Nn \u a\ZoqU.:H\1p{; FEwU.:@\9$2o•3L㪫\ꂫ ȻrEFŧίx\uh\ w~ \uyW*w 磌2w\ug/:\B#=RT8 Z㪫t˳WD|@baVýzW[eG\m#Z㪫z(]pK[a ,8 T.SW93w\JϸqּݐG.5bϏ,zIM /ޗ3>7xF;ǣK6ؕ#P g}3"3z00r= 鮖i*%,>@L#W]apapj^֛mqE>#`qp"qUzq)Tyf和]WL1$Sz$\`7CK~\u2{\Jw;hgW]@56GUWʵ W+`$\=/Vn\Zf]WA• fJ:*m0w\uayvu =\\/03~tOK:O7Vٌ: ~rɑ88=:]/k~bv(0zW?K09 1%X9]Ի(Q^DND-Ջʽآ`~R0Yg->-%w]v_kXz V/?gSZ5"4e<*@"?%ȏiڸk·xLszK}>w߲ ecpK[X DQ-ky験}+s$Zj+&tq+:bo'{vrl6~BT: !]B]޲X3-]vrds6ω ZkmѵEň QڊEgFecNiUvgP@MsǕʕG3O]Ra %{5T0W*0ޕ 1+]`{UydՆJU WEW*% +x+a>qyB W*7ax* "Đ  ]Q6W]%,!*`Hޕ &3T.sW엛Je!+x`3co%ؚ}N.vjyOPS9a \ݛ§${A߮N󿞝/ptv˟(Z2 -uV)J*_nkحa0zJslW `1kY|ʘpsHDqq `0}jd+(j]iH^%7W +Uka@]u{O\4]-JUώ-:\1jpJ['r0S#̮“WW #*83]lVf]u-:@\yvv \`~\ulFUWqUeFq%0ɶngW*; ?}0*/!*)@,+G{W/plz|f KYU <36岆dT,5)sLdpJvt%Sl՜ki΁Slʴ@qiqAmw/ڣS.}tSVM񮥮?ӛ?~ όXeo>,gTGBA^+{ji8 ݬF?C|?{sJc ݡ?!_kAy.TARtWoY?\y 5IUKd[ Y94ڄfI.ۈo~}Sխ$4ǷڋRgW]OȭOKN.E-hҒWM t\b #Y`֚kKI>@T4%$o m̈&};)^X,5oy?׶-j!ٚWna=5t򵙢a9s_Wc8Bl&Y,S-YI+6 րR5֪ b$ m%L%kcKT-h1Jb+6nZZ>!ׯߜKɖZc%:Xk<h\UeLIqZ՘I5-Ts>&8άUkVqvASJR,eʦJh9h/!+m՘)l_7TjPd[4Tײ;7oQ7HjMP: D)5m*%ў"Ew")WmhU&|$̎#u16RF1 Ў~/^իE6%f QKڲ^aZyRJIJ߽L>VùQ*[ojjkV9g%9ڢm19\ 6E06GHZCg9XnZ1zVmHQj%ja]~A?j]hǨ Ҳ -|)RM"` WBB%!ruo ((Vu'3B(6kf\5>@qņlQ|NȸX65hLZ:F=V!xRm:n%L% &z/Z$V\ %b'[:x6v+^T٪{g~*skRSaq~R3D)RS[e\Uٓ)d,UF8} .MbtwY'AjI{CeBӶ.TR*JVxTO(ְAiZu:R2 CEhjY/MGkQEKQEPlr$VB1RlzRnVܸ jX%CYyFMvYÄ\HD$RX(ѡmä&O9,5 Qb-:zXCQOHݮl?-cji%5¸gaex]kov+-z:hЋMuĬeR)'n޵Iۢmq.w8Hלk͜5c( VCH(PELh=4F[(2·Pz[sV ,,xt0wD `L `iƗIa`@YLu>%@ `- [zT&{_ #*A65T4gB@Qā.0QQP-kyGRX6 N85+mw2J".5F5 =ګϨ`3/oHH&9%J,{%`dV.b d`ȷ{2 {xWZECk>81h~!3à\3ƥ"D' G1͋BIJ!pB6}fعMc]^,UG5t9g8/Qv`c{2.0ACka&-A7 /ፍP-hJW%SdIW}HV*Xh <90_`a3ƅs՜ (dUʫ f&deZÀJ5}tX= %$0AɣJA&ԭ.(Gep6WwUS U_^jcΊf`6Z/k`Ɵݟn?ɼ d}E>`Ye> ]{ #BK= RR{ȋxC*Kt]/s&#Tuo{H H: S{X A%h v% AڱbX bWHPl5CKۃA5M(=1hﳱA(Az;ZV=2Ö VcW{`E]Q("Nd'7 ^#g0E?E FyE ax`R;J-* X(2#;L yT=$XW` QYjЏQ3<4)hc37 _H uЪUP%HI21Kf R 
9'е-D?5^?joZ 05Ja-6(-섃NZSvlti }W&Ep4CSQ3Ap8Q{*:!,iJ M\ѓFn5N !~ކ`Al) ʽ.H10C1 P[@H28D]p$@B;o ]!w9wL0H5Kh!w~]j)j9 Mjg:zә&.5:Kcl-Ü렒 Uvb\H8aٟz L^|bl%)V &RA'E%]\Њ[~=NͳnZl9]t|ЛssGz\^.Τ/|F'ZV8^~#łX]wz>ÊxaxS|(}[Nޥ)q8r2am $}&h!zv"t$FRSx< [g>mpGX qI $'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nnڋ1%cpOKm8 eO2 @$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 IMEF;$%M(EΣOeI{&P>Rp@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 tI $gܳ'@&քI= N(@I IG'Nqk%#S 愒@_ƙzgtI/4$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@'ti=g{3ZjJg=,o ?}]ZպI_ϖOhSpI(ʌ&F}p ()>z([9"΍mNW22]$]9aV(_׷hh~lY c̞O]!DMAe{n5mL~ۿL[a,+~~FvOwU87e0&&jG6tD4Mc M+Z<&J;MJ-OQʎIUR+Cz,tEh?PtuteqB+kXк+BNѕ5AH7"f4:stE(N]W1wYY?1x9 ?^K4*1$}2a߼؆pn"8PUj.lL_oSNL NNob9-ɾnM֖5򴓑_d^_`nm+OW ̬3ٽO:V깺cyTZgHp,-./LJwJ_Olx?+x_&7KLu,iMDlu@zCݿ~c&Mߙ"bm](\hT-<^~B/ߓ+7=Kv:w&ET[Z1i={>trʒ3kSѢv[d['x?N~J'Gl+7sb΢&Kthp"翁gܫ?Sa(ؖ-c^:i'BU'~!筽U#ihhpsg~Zm =(iдr.1JʎxT% 銞nN:I]z4tEp ] ( LW'HWZ=Տ皝pQW@+}9RՋ-7gޠՋGߧדoʛn#hovp97vH|zfu6{mtIvƵإsW/>|xOr'ux@'~Q'J%'ve.}I۪ê=uKwi:obS~ɀL^ |ۖ vil'l Ѧם1"t+U랶xTN =߁t}aûZ7_üñX}˧n-;\?׭^_g'D<7닋ܼ!o ۝-Ktu6=O?-q?'f^q8Vo;ksr5gg1gY>WrUdZ_"JM&[k ˲`-lsov fl Uߊ i PIik?˾%W̅^7cdч]ltlʼC,sm+}{K!-_o3h7ӳ&I9ͩ}pT>,V~'-Rw?w_h]*tu9[805DJKܴK.foA]Xc !bѭ)k0֘u]28݂Fz0:7vV[/7AE ᵲHZ9 ZE>KkZ-Z:$>O꺸^KvkUwLۧ#n[g /e.Gs[7$Ox_W~n(K]U^;=81䬍K-5L9]E頣MʩϭZ9!L2|#˛{Y%h[Mr^KK:%"=[6y1Y2|K S8y3?hR<=w(nt_a9Sɥ |''[mLܜD6wةD`q{췝 $JJ|8Y=Q8i8J]it^\*U"E"}5^&A{! $yn 7<ٹ̤eSΒifi4Y@t{N0dZ9JZVeTVT3ϲ#@h81rVӵa"KM8Qde%l'c wPYΪsurw f^bT5dFb} 鿡o`IuJ4h"hSBcLKAAjS _*;2x3j,|Xă[Bk/ ƔD8<7Z971"J)]6ڵx5kT+t zߖ<;&MפY6Y#S. AD @Y$ &lb0xxŏ&OpBNr0&2H&HT&3H QIF,;ǿMPTKv:'In,$@ ɣᜃ >$\0Bs`e3"X`:gU&%&2,w)""2qYd9rw Tyk2410Hr5[.O: //aQ',2v2vLOci`%ĤbN6(y@t҃9Kblc5I@vT@+^~#"=7@Bb(VrTA09ī]3fVOZSn˼.G=/K5H^OKr9F+1(S阆r03w]NQkmQ[#=Yi1zZtADYzCp s%}&%U[3V#gfUjq.߮`Ѕo l> 29xx~Y|EO&'vkl10EٛɂHI=g.%e1GV], P=6 >2hpduYLlNgL5v5rkp;&hjq֮|BAkvgTRdAFRb1ù6`%p&hgPVfϝS:M^6U\J!]C- 8|YEdT'kEF~}#E1FjDUY#A#qbmcTD$(ʉdXᔢ+H9KnA:g9cH]ֆC4G 1- YBKeeXuU):qɱzQW֋zЋ^ܹ]K"LX^E-S!+ECYd!ًȻLz6zMjq>49N xw 9jяI;.4taV'?|Uuj^! 
R9'Hb0FKAs(8AiN$ RP80H'UR<@H 1 ~DPL(; ֢0J}t*~}LN€ȏOE+liIė,w=Ae08Yt`2fDd"2@9E\F泎9Ƚt(ML F*q'&[rH2y_5iZg>\7K"F;AS?݅eL,0s:xAQcdS՟ t`Ȃ0r@3m ֜_;d$d8U\ @ĥJ1bDu5=88Om-Nn =t *b8 K}T=`5)ݼ@z/4r2U5%:k& V*i,H7Do88y}tT1zUB:Z &YBc&̉Oe!l:Lt9i='-r6!(^-Et\ V5c/}`ֱEe!حn!ʾ=h:M |хe>wosǍUh߶q[ ; L Un7Prϰ~ GyJ IOySz]m2:9/n5Ko8._[g_NM[ =-r}NvVV7Gz: $85jH\ʍa 9'ex [c¾t-7Ȱ!AGV(E>fBІ+(a)M+s*p;dx ݢtJn#wW1 n4zqx6ƶeMǛU  H-&^X7+s, 3 by c`fo //xP bxŐ9k&z2kB9!'B>:$DQkgI.0-̠1FiJ9'pEc@(NZyAr \ kW#gbNI i0]-O-!y>5u*ٻ涍%WX~:gאsSk6VR{Oj!rZ&^8AEI!7==ːoGg<V51fϜxsP:6)9P\ 3#(tˌLQC<4([g`:94Ik.V#Y;SΙ~iAe4ꧫQqnٕu+y~ʞN6#N'ĉwOFL0?} sCc2WhOuQ/1zXף?.7"V`LP%+!OG.$#[l97-޾?]#RL{rXnC»wbJQRO :JI}u%hՍG =.7 r_" l;fb _>}-*&/n0S.~q>\ Dq4 mgԚ=_ͣro fձD56pX=tyVF]Gϳlh^L◍_hӍVG{?Nآn?a>w\2oL-/*t/1FO2?igHigOwgbGg::o6\\9uޗ_z^<_ dEdB6hj\g=.Z*kёt:<6$|kj.LEɳI)#`Qs:wW9 נ@ɯ!o }5jg?m2wDR:?Aà%u-/z)5V0U #c>݌Qcnlڝat?mHl9L@6-7Z Kս56J5^l. @/וd>k-N6Mӊ}ֱ"ŠZZ[k!>ic4% ، <;!V%8PZX;^+G#0U66~psGTڢ^yq)kmڟ4l>vj8k;uMT}l6~Y'S>5V VT`5aYvR?NjԺs;MGJ>K;!QRhmzHiqTT |NS>2OQFX]/\^Vyx!q]6;]xR%B*ȬSeiYm }k^i: j )ujmD!Pˬ;\OX~yzP? U߮^U`6A_=jZz{zY#HeL*^7Eepj<ૠZ=ϟ&/{>:7?"[Ӽ\`O~*hWYCi䥍(e8.uxY= __Qkf/N዗0xf3`Ⱦ&Aqua7Qi\RiH2F<޵ UsW[W_{уR H*!@8[w%e㮄\t( Z:|mWT9:  i&)3sAEXk! hȩ0A"u-`䁳Ȍ4"ƖI8g%l-S D$xNT 2~72Iцw!9E-.U>M|:͗SKgkuK.-1aLieXA[_J4KQ\snIPp3% { d.BjH)Mr9$xI%hYVZs_ tGRȔƒdDNg\xl(C1s(H`!)l7F(VMC]W#*cf^"GM"XahW-x8PSw PP>XQ; c]1 7L ]1U)ZP֢6^֕(HAƺ$ciB֎:1Bzi[ 4hqն-h3D7h4T>] RU n=S:|(p%29X| bJARQgL6eaK@hGG\6l% P)δ³&>seT.R+YgoKh{Afh(VD604dJ2xg'-c y۔l6hR(V, U@8va2.VV8k*8R`BrC2(P)%86cK} r1Z}>E,e#MD,:{.I _+ŮUhWftޕY ̒ݒYk3q] ui[3c@9gS}lt:ޗ1!&ݗh8:'ܤi^SQN~Cez6ii1,Dxy0\WAPip"\{:@:S,m MN±hX$hf$gxQf9+̩`5 LZU mHhH4$]4[pa:"]\dkpƩ᳽vj«FR؈jW:Α\kY^h!k$2עa]F\NTD<8]{ڢq_oݠu>qd|D2/>"BU/!J`H1zDWX5+Bk;OWҹ4=+Xo誆kBWu"vc+Cn#"? 
OLu%]!]YΥ0=+]p{]Ǟ +B1 tu v)?=ݳO.'th5q+nb"zlo }}aJ=Oއ+i+N'TxQsCoQukYU'ߧ^7C'E׽J<2RAd *8t%BL{S0'k 7;\ Ї5wC Rnf &Csǝ]!`7tEpuo*m'遮p:՗q6wiKߍ'kr=}7|BԪW&TS&G?y9N˦ 9vNUq^ 9~N}-.kzD MZNӄځ)1蓊Ӿt9W++e_*u"FtutE ]0LG"BWTC Qʍ1;Еi]zCWWؾ}vut4]!`c \c:ƺNWRc+REAc tE(c'+ _ vѮ.p ]ǛJ:Br]!`.Kp ]ZyP!<ЕgPOW;u>o ;]R tЮ 1=+zCWvCT Qr:Fh Fo {b7ZӢжh?PV9#F)wk/4h 7]iB b#iS}*0+ /tEhB|7n*@++E ;zCW5c/tEhe}vRtutuR])zDWV#=Е6T +izCWWfКkW c+c}r#`]CG0u"tutezDW ]\J:]Jc+gs}Zt2-zCWwh9g]+B)أ+'èՓ 6lǎד鹬ԓ`b25n`қzR;Az?vd5trNjԝН}'LfI᥁=_P8,.X6\)(+?{Hd!p{KpvXɢ,ԒgSl=,y=jnvu*~cۥwwmL{2qt)sɏ 'T'zۤ* lF1GoJzO~HaӮ$]7߰7ڀ4,sxk a2 X(`s}8 Yڤ7?®n,S~+|?oUpW.ە,li. ,'72`)ŗ8{/`#lQ&]H#LYb\ʁqTZzaɻ2mF@&T*ɔBGNs G[Jý]qa8_jj#C{ipiܧ d*h*~'xO>hETZT>x'Vq̀ *D A-v+Yy1(O, HK!y!yH4z |.yeKZ M;AZ'jT"q,0kLR 0PƐ"C 0dLIY{p 6Gޮ8d`J> 1cV۠UP`~"D8a\UbU'pC-`49 RϤP#*'QkR4};^9ؤ1˭â CTv *JВ6hʌ^s0A DŽQ+rgm0#⹭yn fIc1Z OQ' hg"P_+':#d o 7V=~! jtpIrޗ=w]!x$ׁ"(ˀ& XDPIhdb6=f(dZX)Jq:f9F5]t_&b d큘%~=7"'CfXr`--2LRgE VN,s #j&_m6Au_QEg|mȩ'[v>p#܃;* n@Ȕq"nQ&-a֊!Lh]ܣ=X*YȩޡZ{եa0p JR5IEB6ȉwG0n=qtH ( &42~D翪<·0$kXrAW Q̠bHZ12TyOܦ+v&jNi@,.Os.}p2sIY>x+l=!VaEDC1H )B+\d˜8^Qamk<'Q,dTٿVvS-0(ړGr(Ȧ\*؇AYѻE¼;owM@ˢyna:_0Y[P?~H2Y_j(8By\Y5Vk˒+B6:0Ӓ. N#(7nvz4EuϡG*9![Jp:yS"M.mT/cv@J\9w:u5YKjy{UG Ph~E{G6O̅m_t(uE<~TUC^{KH)+JY-xɏ _l/u䵔a2푷_1<k^k_x ܓ\BZ2ˤvUmKi4_rͯ/_r3c[A˼a.=mFμQko<I.KШ%Z2mmPY"FvA$[ٜVv`5Υ,ЃEIKDL.gFZA vdlMÑ=Z,l2N-o _4n >OgXx7yxYP] ~=Glab`B7]6 #e3D]vv.'_4{y[bYdVT AL (a#Q l&nǜϢMYݚ:#v <ݚu[^w%j_>XȰH%}IN3`r k`SB;#c6UY 181dB\-Ś,ECb]N,F mEkM!{L K`<ؚ}uQ_"%"@-ҙl\ԱRF&QV$˔^k #ƇڶVl,)q&][Ix%J\rJ5uGHU(.ˉٚMq\%.%PBȳc:d(`(+99#E^]adAL.]sYǩжiBXQaԬ}._g6gk]q5'U= 'kRJälwWQ;_)E(V1IF /u:㉐E?yv 8$]ڟ#6$BZ0GcYB STɩ*8.~C^Y6|Y^|Y.wPj3_ =?o=dLyDA?2 RB62D'9Dfݪs41%e1ē`+'<'aжa4k)JI=e'nMNB]Y]I.P99c;0euj9Uծ]Fr ! 
_ `̓lslPI'13 u}uQԶݗ) EܺwtWJ o,ͷ?|rj"n??'o>-baIᣟ+Y}:D݊Q?d lh:Kp).e\+ΥͿG3Y 1<_Lp`ET _wkmHEf7KR3`Lr$ʓ`VS7stD4i0%}KwUۗ_Zh~TVbt4M1oٸ<& h[O?wgvx᮲_W߿~G{QZrL> }(7O^Kgm?V}FaKjmQx/E kؽF8}(DPYkB.5um햧8"{]r^/#™Su{ܐ)(\RYLit>a"G.5".< fzI펠^χD<=^CCCC[ؐevsWq$Gk){/)+ &lwlW슠w7b<怞ǢǻCC[䦟w՝)q "/ުEks@ > =hP%[T`-¶@i՝[.;hjo;xPy՝}qbjyQ@#a]iAWʾh6PxGfV8Od'-ް Eq5YWS^gxһ߶Grh h8رݴ66 ^^YQEk{ynh'lѣ6ѝՁ'OӚ[Ke~ν&Zm[?hwEΡޣ&x-q>Z$@>R:Ohk) c:\8ۭݙkl|5a ɡn,gjlܪ0ܘE)"ɌUj۳q6.`MdJ" r2\'ZK .κKY_BRzfEo/Ow+7\‘ ojm6YGGe'ܲx3p\SK_X]]F<"El(GJ1Õ%]}d~;@utQ4#jqQoB79fתR̹#<~!!n9zÚ?+^ۣ+j*9-W.%Δl\j $/ .+-<$aK-F ÝVbaw,*oj p:ڱT@դЁJ6BpdΎC6@1LX+ޔ5"-;<hxC-myp3w<(WoHjڒlr))l4rtR\A.[.  {[xøpDC.gyw.,#VK׊ ԡ1 x=JI%Rc5ۍT9لəX  gu[yp?-u{\zGPzJEUm1ǔWxlq'nuMG{vP4mm7]AEmqUlG6UGGAdf=~pƮz;ܶq@ccᶍm{ MG޹iUَcR,\_}>_4=#o$- _^|'˓+dNtrKy\ e!t yu__|(w^[W^/_t+|_ӷH ]yAޱ$"O蠳 | j}u%pAU!k C`<}!tcX@왭5r:$Bs^o" Tng{#eF?=si9@둊K%`0{V".Sr^BGx}}9^j;W]D#L/ל_Ox[^{[78炤xZ|c$?-LZ¤-LZd>oztlI*U *mRIkf!TYub_pяfK"YU߼u>w3yD@Gez)1_mt#CgQԛ<^q9M%YLmЋ?Eh-;"Ay]Kʻ;G%O?1&msS:iV:iRyS}b@svP+C9WXkm$o- 8ChHKhwK/ #4FOt=ߩ T|G4c)Zs'(TQq=5o|S51[co6'm6'm2/bT;Gsv΁m}+.Ք R!XUTIcI}$1|аqu.?#֯ 㱮ؼ5X)I+|=\&U:c$-t!JpK2VE,9XaXtXwc3܏LQ\"+6:EK;`\V&YlUtJDx!m m 1S{}-L>N]<.1cMXVۀlI>3/4L˻dzoHl#Nj{|T3&}on}՘P[иׯn>pm#qXb6,Gֳhe:/%~X4Xk}{yz*d;'Ĥɋۗ_ZFRY8>: ӛ셏 'jq4?VF$F:>:f^{'#5V&KzpxIow]ӪBDh}莇"Zm<[E141IKI "2Cu2rW.@0g/%ًl?Qmo^5EJDc1#ڒ&;g E[@,.4LXTʭ:]EO6FL) f|;\it]-=`t @mt\#]3ccw6dx|,4Vid6lzlEkqR;UսȌc.Z4@Ƽ!acd~kkaK8#10+ 1q m^%[ bV@7 y-U;&hkeVNkpĔxɋGM&+q%~B JA…($K@&[tI$8.ƪXшčG"[EkuN*hdt^:VYf/:g5h׬cۦGy1)VEu>D[XFĪs|֒<;r[GgW)a'5 Ym/.cpIbJո$?ϴ%CQ+vj2 r.ۮb2+)&3L" -HVL ؙvaLNJgF@~hռYիpOGק^'?_KjK}M>3f֓ (J@w{神Ξ0{ٴDYlݛCgMˤ_? 
=5P1g ?Jj T3VJ"DΦO0S6u?e4Ix%⏏~8}:hNiܰXo65TsClh&]ֻj\vGLԚ A2Гo~1:3t.9-tٻ_q剋[.u/o->?;|56:N/[+xF뇫_],xrF?wo~, ]֎ߜHӳ>:o ;&Oф# ER)zYiIE CՌR>dV?#Th;Y7AUU XyYEYz/ (7kɒksU,(lWdՉf6u]Ky31b~1xY{׼Y2c"RRkt׼ٙAni8?6^;ޮ*(/6vؒ^ ?063",85Obwdd*~C42q~΂8O9Y~A[eҮ7nxjLziwoU_Lqj%q&U5 hshiF<4SQ=gbી68 *ckod=pRƒGWf}1tqalRCj#jmf8q7b+f]lRݵ7!VJ6j% Rymm+ooWjE 4iެ \Hc 9[boe4rRPe}ħT &M%zwmI_!]m\@vޗX\SVx߯zHICy-Uw׫-GJ\-9̗?f3ݑ,Iz&YMԐJtbNgcv]~VkFvhaAmCpyY^(GQ=ސ\:@Bz};Qx+Ye}JsցB޿t F4kZjagMs-/pjfqvP4BɆ+ CO n4A3p@K9R_Ɉ4-!Wb._pc2/[jgw KT!TV8^ډT\peyLp2(\ CpOYYo pMe#œjF#_6D-1@ [ȡ ʑ׎FєS'4$O!;)(dsU(ఊ ($emG{7|@[j9lqb _>$~C1$~CՐxiQx"Ԋ -q?}S-sk^) M"lhfh6&M}%C3;YiKЉVa˚–mTBvgC!nJpR0{nJ}dᙙ}ofNy6Ը2W4p ..RIyP g$#(ᆔ ofIV F޼M< J Dk->ptڿ,(pUFaИ\53v4ofeKyxF_+ (eQV3k4A ɕ( C9܋f EɛYْelPSG*sPv*8 #ŝm8_R_ULY?AY?i~{Ei$:`ؙ@_pHqOG$ tk=,)0g5wRqBt=:ťChr2z'(T&?T+ N~ ͇녚(nf5VZ˴h4u>a|=sS|vO&$#T&H\5H>A mUbn2CA*M# Lvwڈ|R&Dpj]X~ ,G~ }! WG2kCs_$NHpZ6s[eB2-Ze]ƧEVsX(Fko'Ѻ@z$լuj'nM3eCd:NҕN=QJ|}JrX<[ydR ,j W oҤR4V 6X&dzq3Rp\vE<]}>;Z< 6͖?RqNL&5w}{^9-%hDpyu>.YcMQO/(Bq#޲LBTJm2\nmg?DpO.Tg\@@\fi"ס{]Ð$|t~dQbOO @A}?ua:/.>pևnkU\iK4zc{4`g'0h_l|xԅ,^đ`.f s^D9m$]PYn'.Ikxͱ+?|t41ӎ} L)u» hl1//oCTCyEg"742/$Di2bc:{ΣY)3fd}n}\az24]Nrn_P =ce5p_MC ;nY8J4WF*ucnd:BBpp[:2E/Lm0hi3C)1N6P`oxjLʖn,zZL` N:Ey3w}֗Bhͳu@~ƈV2XZXϭ&1 &{t T=i/j#؄*HSϟgnvXX+Sx5/Ot5N+'j6V{_qٌQ2j#Wj*ֽXr f <4w8gT`-,0mF-@㋊ 9ҁig{ T+UawVݭVS,EԚBMuW;vf= l[5 5!M9Mq:\0>ۜgs('XW8BQX3y8FQ;!\K$q"6f0.IG.s{ / e,k+0TUmעRzT[pxgGB6HJkyAJ M(a h"ۼ` "S[\+gYxG8&VXoQ !c4"lxT~[p 7Ÿ1e'߳?>E(J;~2 Ǹaj O~7ӛr=~͐]ߑ*%h$?~{tϚkt6x4^_+uRO넙j-_ns|Udk"LJz$Fq$ƫεlwE0Zmja. 
B'm^;ܼ@/ɄHz@ +ބv~CX\^;;LOP'<lyOhGOw|̓[>S/orȮ^S\Q?k ZsxkȞ4?2΄5|{pEWPP?9kb "Ff4[XX0\pIK\2ʟ6Gu%d8h5m4@١,бd ؉*>9w5uh,$ZUA' gʺK8]߹I <2UȬ*TE\rn1-=Z,qp.pD]NCBLg<#p)B% : __09:bqjj+s7] oZוW@:WjU(޻3WGߟ㧿}Funkَu[u*ߞL&ILE[MVߓ}կ.G/5vL?ofS{ ]gsr(O ޺ºvoxQQLq֒=e>0I#a8m@&=b4ɍܷ;чZr3S6{!rF1ig pÌLԞ@۹]dj bUqBʝmE&Hn$r|-gSa% bbD)BvHD N>jytd5ϕ$RIr"Nː5@pˌ^g&R)^d(2a#2$ؗNZ7iu P-)m}by3э)duCBԺӫP?1 B Nڬ!ǿ \[Z"rI"H?viS wt=bpCˉL1k/G87N.Yd&qgFs'7Eg35׹CUcg;8Bwص~ {Ղﲛ3t<0[5  on ׇ}v L:ʷg ns&NZKG ҃{\ J ֚"VIQjt=D?vH " ls [Wixl<.k|0FL#@7pie m2Kp#!'ʹ{Yeh)iDeT˵mLLr7_cepUjZ(!D 3y?< Z ogٹӓNWg;]lQC R =cf{˨.xk0c;RZh=3y% uy^=&㡐k*[wǿfVF>S^4Φu#Fxz1KEc4WJ4uFPú=]A[S-\[Q(G+ E3Ƥ`-eڋ3ӣڤǁb.XOAoAM lkr-媓"!,bgx E^9⪘:؞1̶& 9 k[y 0 N^#"in4ekd+[` ܇NaHH&9ks KV`L4wFLK $S8Ay軣hjuN]J<ڃ;cBeʼnx8j \@Zp㦘ݶ=X])is _Zh̀DB-h*K$4kr~/3ITeM/d>hw`$k첍z$I$7>:׏l}-zQ]9Jo,?'LTCa:J-InEL Nэv?p_.SFLe3 €t;'>QVfLϘ];Lk ^]Ys#7+|;<%Jìy]&M:(X")Vjd/3%H29JA*$yc>ByȪt*4¨a1۪BFWJk$@@T_E֞.ouMѹoᩯ) To{__ER[xl*W_=d+x͆);5vox›?5X׫ߡ e%W&ۣ0F j\jp=DK- ArO]=.dEjα">bE ֠׉w#BQGDHN厞9 .Y֎Lı,e=KlAYݞGXknkSד=NkNH$>䎓̶jGJX< Z'M5ňݦ!U Bݚӡ0., ܧZh:>iN3f6|_v4| ORƜRep·/ߦ/qz^QT~t^ϣPwcˀ-ީdv_j^? 
6ӞMlNf g=}7נH'\xh'4Y%a!7_\&__))$3J "\_G.ɗGr|Rk6$z77^۪s@K5!"ݵ\kRhޟy9Mq;;d;%7 JGI:}7 yޡO/i<11%Su }wo򤈟7Tպx77PDR P^DF>4竈·<yc 2^0f `2 4$8_kTeJy!#KBʷrmB^?Dl]UpkC8׹Lw^kmyZyR[H2ְͪTp tXm/=%\")Pdti4룏镝~,|Mq-2z^o ae Z!H-"e~?C(B .5nwbf6Ewզ/j}88e \ 4=ؠH"BenBkj {B(kJv1mbWH!0U?Q*[l Zc0{z%_=xXCe7+*Ტ}I-q H3=^<7.[Q+$n76@;H-#= \(M~.33jٖwo ogo&3 >::j4!bW,MLdb8")rǟlokήqd%Q#u0éOFZ)&D)yԸ_.=A"j z-ڐm6n6 T{VjY0Zu;-4 Tj-p~09]%]@5p@>RXu6Ǒj61ѝTiAqaX%.M>5VѳjģԾ5hmout%,#+ќl?~*!JȭG>5\\W]q`Ӫ俭aE/O$wȵ>@+#=BPv4xf3hKЫ-rpblbVIͲRf]$:̈́!87 g6b=l+@Iʍisr4C"u _P_ljb,!~ZHD-ay8Cò&nv!x7/gumV*C)k](0 h!`;Hl>eXKZ;g5]|Ex@6(G卌r(!]rOL(F^ZѴ ϸSIۇ%UnypF(۱TvC<;~TLPMBRp/-(l`8PGARZ AD@@qLDvw[u`Q`Nl&qJC7Vч{aQh ۨѯn)#t{U7)9^RPdFFHcPhcņM4UgWwC0w4A%B :471Xzi)E%EMo/VkՀ)ox/?T\zV#&&jkA6-k h k5E#; TDmz"rBV%᭕:XvQOtS"KyúpIU~9p tIs;ൿ:p(̎(B@ĦA"-g"CVnRoZv8_"C$Eꌦv)4$8~af$wq/3FnED6IfDw00lhL46DB8k!=h 1d᚞?eVP!T(VVCA3|y/{͗'ETPMh Qcȝ0 * ;sx=PBۄ" )">_Etm#4|獝Z|Ap^@}$3Fc"vԸH`w2&SLc@ R tQ+IUsSL(Ff**Gs<2VE rbCn9F&Τd2D_tTHH_ S7 dl ʈHRDW)p6w1 QHcIQCM'FamO2^3Gx-OsdmnCa9Dy"/Z^lru Ĺg߲onYϹ`ZLfe0:',1_"W?~zPx2ѣΰO˾Fo_{8ʧe6Aqf'.F>if h."d6Aqy#/.QyXK7CiR=~;@5J`A3odD790:׹r $Rq -k8B^*_Hά\HP-2 $hAH '%r,#F9[.y"p.(-xI2-Gٲ C<΁IeS-İFPN)J΄-dId 1%cZCjl=S#5mgWCf@`"p)ٺ"Jʒ Y) ;!$E!uQ(]% cALmEg+$D$KrR/˔#bVulJ1) Tpb2kXI-DN1fJV[.{X,f,luyZϹg9BtOkSn+^[s`~L/]eκ9s9so*gb'>]NOn1ϭL]cqy=A>;99>Qu/N48 O"Y1W㏜N]D?:|b>7tiu1y/KJAgCLlȖz:^)9 ߾9'qYU{ӫ ~à+  l2JA$](^yVDw=Ƴ+莙 C*NXT lG>; mY^vs֒qLޜI]}3ҔOԅË\I,Q`F2ZeK?gX lxvbƹ0NX9k5Dl ? 
AX C*u.q)O[ǯ2DbMLac OXDm)J̄fr4ݎxɒ _if&FX7;4 IYJ5!ŻTpO Ϩ|^p*pYN7;gP) 7ů$%'AjL(Uٙb yRܝ70y)~bxgSg'nS;ݺ'9U^w.w_)gng7&%} 2ka-7o_E-U)BXKwb 0< AF\k۟@DQSl,pCcl643 _ dnZVyh"wԟ9|<*?3Z8?` Aw]%zk'uAHnԽ]ɪ* (AOѭOjDHǬIjDi_TY$1לFPlN`ZNy?_jqE3L[Hh"ũǽ,(d0К&>vZ$ICKYM]_oWJD l'qAN Y^QF;%qvO4\^3vvןzȒ;-$JAa.@-@m|FF|>JNw8=>ۄVNLYV 5m;7oZX8_fO P\q;&cc?I^km Yȼ`6I7TFEF l0]/ZcXَW/fx&̻%:NM.ݏ ˬK<0Mo jE : xcNm{A}tg&dŁϊ!c' -xNf53΍QǨnjQjY{qflGugh l͚ l06>7>ܹ{hSUww?O@ gm35b\|bfi; C~9A)ٍ_3KΊ7 lxz׭ą^oQzG7f/|#h ϱEӏ{-}|L`d% {/ k*^t'3ZN/% "6=Ff.+rt<;11~n],=˓|I_qX&mHCh_<3ehJ/>ǗԘ8$KYSXYECOF_o#] e-?G?&7"fn#\-ݑe9E,)Da~_D18yq 7]ԔY|+4皍#$D7 {'C&wÙmQNdK36j)J!D;`gK~7{_Eۂ7 R[.5nŭ5)AM5hқ0/ q\U T=qq^WBD^#pl=ܸԷQ.E"HՍ% 2Zƾ۷j K(tQ3./f}= "tcI|:ksnTcH8z/Kst0-/U@Ո7jk*ѰvC ҌNj}g'Җ{9 W^iwa!q>E}]FIF_9[؋TK2U"=p, CA(jZ648}nZN> }rxj (,ZGe/[w?:KwqKT(-G{|u-| V1*e7avm>wBzyM%4ksOP;]~c7DBť`F9L2ssƬcb8 U<2W˩tET_A-|4 fi,;üy͹ߗ:J*Uկp̝; tw[l|h XeI.HuY.PQ] W#©X,ۯ[B`CZﶟ_דP1uc;9BEH3'ՠKH'˕"bufb%՝hVh!Љ"xK ,4Mvī|Yˡ"m#L9O-ԘTߜATpB t*!|%(:LtW{ m܏fQM+m>]?y 3c AqtmǺmʂ`S-ېsr~]K eJuQmv~LjtSW6v0u``@ ?r%]2l H2ԙO8\VSݾ~l@W5"V sґnTO6Q^hԕ`O78k$?iLZ1_yx/賙ٛ}8/ Et>/t`.ڏ&|~ч. 
ܵk/pE`'q_$XbC(1R8&<1D R '38Ey~^E't8JL*z73i`kzD.{f;ٍ+pv /ߐ(|0lvu3~>xr_8|jE]p{{LW?"tݽOiq폯j(:(X?{WW {k~wyyef*3֊ 0Ҍ-p6sLuBY)&OS %m"11MlMH K|oѴУ`ζkpd\&'͖b؇mt}9T%)4NWHGNKh*xR16@EQgM2mSPϡΩVVY 7I3;8+שt,2IsmbW B}f4!}\X삇BNuXŻỳ8n^\[,_Yya:c @ʶ,v9)ySYߗk)wb-τ/]kE϶=& _c|_3U |&$8+2afU9 <5Yw65"Ƚ' Pexjpj u)#DI,8wXS΅@d/r g/lj{@wtT'c׏>nz5,5Uk%5yHo*1Drj+@ؗg60(ˍ3Z@Lf[`@+&ie3[x2߷tPJm!{f h],^P;ھR$ WL4ꋄGXBqFn VGmױWtg9O!zbΞw=lj!>7؛LEgkf7V‰r R6 &ѧl\y_5Th!@)E r#Z-k=ܱie(VSu~nBQBk$6>^mU*R+bLyjD _R.+aP]GF[1xDD#2c T (l|w"!Ì3G164WdiI!ul+`d-/&LO6eNc4c:O+[e+%Ŷ;$Rbǔ81"iaX'V<+y,DaLF'*na(1%)\7]L6ęKK(fWhxbUXL*@S!7y+0(v|=of0=/'z{I&)JtLC}[gn2qtJi\e%.?961E#K_9_sGWaWOAlw&KA@; Ɖj*AoRHk*,uQ_ξzrƳܑ#MWp$+Ϯ '&3_Euﵫ/[7os*n4(ioЏi+qs1!'$%.NR;ΈPH N q0M AuW+2@HP8һ_qOcx=ǘ0A!1%:r2Nb\𩌏Fjv/AQa,Od9 Bub+y4o/ĝHP Jjpƍi*єa,)8q*a08Cm~,ȃ!$lΓl@~v7J^73Gg惇'[ kYea6Chrzc mx.+ t߯_m ˠnn!w}=wnt1ob~xƬ̇}uА@Ogmwbͯ[:xVy62aǂ huS^9 Lv^+֚:%sZP8 -8%Xú-=|Q:1R$\~'!7Sw4^+18G1Gn(ooc)?4##żoIvW`{!u<:H𮳷tFi0o/i{zG!/r0npg?7_w7W+š̲&TI_ȈmKwTl_0::":nk%U­ {ZɗF|#0iSc4>RdYISAΌFf:o]lN(Z1}FZccw 6f\ow-OO׉Q ܒ,X˘ Ic+Llft C0{o| K;ìvyw-1xCl^ TKjWK9j;9c(YWw,I m皷s;}Ɩ@|)R̀w#RZ/#j!"+_nZ<4 .__]1 1C&L R搓Yb2fj5d (惼!uX.xNF%yIu;34 gWz@! nAѓT3ݥ&G61nD#Ň"RӽeP51CC&ȩ:rO&4߆SNl81^덞ovyS͘ݭ ?y#}wyKu6GX3wk-гYxh"owY6^E]tbK_?O@7چBa+|̓ݭ_^-QM x6v-fgOSJ)ga/ܤ 7w=?֖/uRzcM\a2! ELImv FtQF=0-k[hvkBBs%S{{ ij FtQF=ex6[j&$;2%{r-= N?9g1錧A{Xi]= [%s97ݫOfzu7LV]_U/89A1СPV@BFx%gЖzN1 n0r)pZzv)ȟ"L( ,qAsYf,RFGdG0bT@"jP-bOp&&Ku3BjyJu4ňZvr1IEIh-d dK\ < Z6z(i C2KϏ7'{ {iypt,*F2ߜJ1&8/5+Y{I@sخQ`Nn{1Rz/c$\INa 6 :!b"uFмc_&p) :X)k Ў@V7Rpu# ,d- #_̊P`ĉ(o@j lfjȩ7\qEwŒk-6TD2K\ȺFg! 
uhDŽObT,|ĿVD)NORf#>` ;!IlD8RVY{MC #eOK &(8v h橰X:c&IkztcdAjdS$"씱Πz= !Q5G")pu 8cr$58  q%޵~@3OE[(A[$ C=Rh%$9kiGnq[XK2iEsJȚ,-Ɇug%ָr_ .kZY{Kp5%-9[%OnRݵ Ivwx$<VX˳jP_#3\p&vv?'37ǡp4wSpa軺7vԯ'qQfr?XzrxuP(ӳfU.d͕Tl퐫WW=#i0&1HIM2xK\EL+O!T$z4*lPNޗ (u Б>^>XHۺKJQ\lB@!z+/25ECGцQhSnFB$gz#_V6ݻ6y1<%ԶdR֥"ye*I~ /XEWN@>(r@]Roe0)4V2 (P+!6X0qUu !?bUH"3 ըМ0"K-hhfey3FS[(B"wYBz_ʄ _" u>3 J˕]̲_.~rt A,[>7&สjgƀwmtS$vS'*_a)E-=*+3Վ/H\)DxƄoB`3ʞU+{1.!Fwx &bDM#EH[U{`=FaPAec͚ zH'a,t?(ASRfʶHH 4EMaiP_)6G9gi1K1r a氊5#u lwã r RQU<^ErݍNHfITD T(17ұ-\6.Q<齂U1(#E;+.Tu_vH?rLi !bf{jcJ20W,X tXpZZlBw|]Geh)0ˈ5MCf܆#ErT-sʭO.<` D.OYXEnP%i"pn]JQ?nwMt>]kt>xy^r#VdB\d exOG% L:):XrhoȜ~L(4 ¬udOن ç'YieVa ^\&gH e;VvrnfHTQQ0_7\~F*$T<@X 6{3fpMe<0$=-]> Loj%$4i, e-)l8:kp?|6_z]OJBP*V1.BZWw=k8$&I* Q9N4©# kE`ъ;;J:]W@ȃ0CuLZX,^K"Nֿ1 ;q{fjDr޻5AZ=jHo=br)) Qoq2\IQBCr-)AN~޶nѺb:ȱnhʫ&Z.4+W"w[MB7bDVAꔾ#ǺyUoV^7Ѻu!_nJ)U"p8%iKJ H[kuR]IQvXnjeB5#LqZ[E F$:8H kGg90b,xWhŘo:y)Va c)/7JzC L-"} ^`{t$c/\QES!4P@!5\SC %chM EuϚzuO#WcCMSMåFFV^k/EnĒ7":AI?4Eb 8 la[V)HJw/żAѦݻB>߽d aY$s &%X<~|ExF ZZgS)|@n6\^.&4(EQzkTLRBh~y[=3~HorFTo,zauQ g7}~qVsgmyH%K-=o[I/.vbо[,}i /? ={a? ~v;MtM6rrk0I4E׭SC!%P@4]ZJHXbǿwd{\W U};Noog]sݟ,6&:'6.L\)C7 pЈwtVx҄͝f…QRB9ؓEӀ^;$-'<] $gYP1l"Bfo/ޟfo߼LU$K O}i 6$1 j]@Ә1_'AlE ]8ު^Xr=''3f&A?̳Q7LKcvOnp=[ B4Gb̸z 1 'Ǡ%2ANͱ*F,ZAs1cWu5;υڗP+$KWUf +yqn(H&&pcr:xۭCXbD`Tt kKضN6)hY:KILJ I11MeOˌ{:DD.kOu_3Yv30m8'/|oYfeT8)QIpRtMR b%oG r,<b w K"z6WLЎDz0n~>[wIJ,n,seq6n80`1vz1ӃKqw7:*F#Q߆+f*\󐛫YI޷A7,`)`E TrCKZ<suFz .0xpN(;?NQX]Sn^eIk hZQq"8 SF*kHl4ai$V^".`{pBZw YsᷠT**ߵL_IiEqm{jE v]~*B}Hޯʵ&J˘uUa nRuS{hEL)$|UCsJl޼+ZqABsOUy ]mFJ4*Ɇ&PLN+BdwH_v'M,/C;sƈX[LbKZ/[-[&{,K⯊b8*S3Mds"7J^N.Mţ(D{xeKFY(iHhF}.>-o 2'q%:rȍ&eJzOL{FQB9bE3HyXԌ$T-Úąb*"!͕N\y,NDZ$C nyB ̌t$w1k[MhktZb A['Y3t+;: Z8.eP-Q+2kN[˦4B awQ8e0 :rVHv!]!Kx 荛j7OJ7 [Y5O)Ca+\0$;b "1,) `@QCے]"L;~yy[ZSp!HR2-_f=uze\ i];A_$:PȺ`ގW. 
ڡ&oԓ!ԕ<"43菣 xS[e'=L!Hmt02z1ذѹV1CgpcqFI .Þʢ0L\t$`3Nb<Džyݵ0ˑO$+zlqblE8S;L4m'HT~$?;DY{ƚN0}XRJ~ tM jz0D0W\%U._H_KLSމX{Xv>rj믡6 kgO8c\c֓ s9+9dez9Tp8uRacV !Z.7rO`Ts uFm>!v594r1keZĔ̞gPرĎ'V4}k^WHTcc ylñ2˲[$k{-Mϖf)ѭɿ~5ić=8~^UrvV/+xG8!nA|ɀ=ˀa]Cdo[O7k/_+uu{S}Rkwg)m䯝[ɒŖ/dFnlJy̏ckv0qo-@ sq%=N(SX`ހ}񨶚|E٩-)cT^OR5}M)Fmd$ zѷH{ Y5̮9PJ vZ0u7mxoGUs=͑ p\WDV+&90lkΥuE݀V$Zo?VYTȉ0B#v8VExcoG@XW"#}'5'XZڱ9$[6ope :[g PL C٣B/ *J0wV)lWIs:Mj1Ǥfw! X)wN9l))&ժ_߬ R )Uk\k)_. -ے9f}, +èC&ӫ/4Ř~\ˢ/,1ɄV\>SiiduJ'f^9*'2+[:AG{GX/d(UO#J@yh9m^~;?`ALIs>sc-%lQoY\5XL:^2000ĺ:e(2y)!de1dTEy 4[R F7|:?X-R=Yб~{YP #*603#XK2 ';xf`}TDz:F&ʺ b$Sԡ:06=gkDBvu:!HJ~7}Ũ^tvI_T7Dyp9nA7y*ood&x(?ـ]u^Kv!phqYn \;\x(ׂ@9V.ݣeC o-G+P?~E ߐ_u"=[ I,/)Y[6W럙_n>7|wF 0zG^_ Ш5G]Aq5hYZE ix+mETB!V9{Q2?.j M>&Y"+ZaN3$$FT,h2>R$ƒ"VgNF݊PIfq]5@qj-ܭjox#n}UM0KSMXqǠV(k3I69QJ+ 1OEռavhAv *w&d'R@EYbbZuQpS$M bVhj-[́J"TUrC($@^~jpɁ3iFAZNd@JɼBFrK8@?(&gWonc ɾZBAZP&^ +v@ B^mCPZ!3p!(I$Hrխxծ 9yZY A6@.x81(.kLrJՖ\r;z<_nJ|a,}[ 8|$7fWEXw^1iD[8ؓ+?8]AϫصMc娷2>OA,)Y>~kyڢ_^Fyƛ`6Mגƥab Hx?nc9P:R~mwyPpͰ7}xZT+ қ s_؍hN.R7FZQ$g!*4N W"%)ʲ%HyPFs<{nh;C.WA-_OR^[XeѾu[cEZ\vu`% bFÐe=IiCJ: }:֮TѰ؀D([6EP&Kie> FfE5[uJ'4`;P[W~X9Hߋ46dx%ϠR3A0z)!qM92NHp2'`:L[#QĨ3Ĉu/b} j@-p9k尷f"0aע=K Wy`0 G j cRE)>fYt0JHDi(,Eא,#&ג 3w*F[ 313Nh FZi^α֊^i M̄@c"Yc hS\XMbWBliAeZBfJzNE%ETK<0SRJ+Jꋗ [:aK+o lyIiӅUaʱk ˟=ȘFXcRSZBr_)fM)LBSeNqiaLXEvjA$[{*ҴC4BV䦳Ͽh[lAEàxm/+Ɂ̉B€Y0B9DKG]O%l 2 DOQKS;Л^l lR#H$G4伽_( /H()1I5֑zBl <[Zr)Jp1mP ӑzA677r4>Evmg$+z&cnܔkE/!io7çQrt_õKO~}=η??B?~M?Wt7Z}Ƚ(}<~J[I΃]] ~R@ e %GZW0u؈ͮ{p5& j:ۥfF[5a0۞ u$Vg??k'V+n1Yu?}3ç_ܚTuPn5S?_nGfP>DA;VlSo"-^*պ'Y4[I7ng:kn1}Ήѐ9t/ n}pw΢Q<V=gMk- BX'v6m@/cn[hw[o'uP8ozOn/cU_ }CᜊNǐpdyHv3ЉJm+I ?H_Mɥ%9B@VxSNo/jceV]uqv\\l>xVR[f-%Ǜbq6-?Xsb [;x*@Tݙf$ڼq FsXHPCҞ xpH i>Z惤Oh:ozGTl~;h`ztjMےV:[7:pvߴHҁoH`iKEG9uVH@寣%E JD_wǐOtV+6\%aG T~kVFpr9'o: Pք&!4B BHeMeX')GRvOauI(g8M뫷yԆ); ;0fZr /R[:KXJ P€qX'Z :}wۜ3`*n6OcM/ioRDiǛ&as桤p,D|FQٕ:|Bu$"^Iu$-sObb!~$֟C"a(6XCtN}??fΡuwA?zR*=A@I$CSlAdLL$TtP,W/ \K0")Cqr0 ,KJDGxen[Cv~HkǰԂgJL&&tD15+zx^y5/[Uy&Xar `k +0:UjN2A4V`w\9ݗND]'ەn2pv9p.e9&͋$B6H %@:-}[ flu#/: 
8q\%}d ٻ]w3u_$pAXE<0~z qqp![-2$9,Pi"e+(ciKpPCpA-W:\ ڽTYz_fgwh%#*ªq#/TuAȎE'\M;|F#H4gܼN7ߕި]/<(Qgc.E' wqe =Ze(+Me1Ԗ$dRk> 7b9 `#GLa-6S|wk<xa† K{IstI2$5'W|^|n,NHb(9 J[r| /`=L^//m[̩?I  rCp)wk}U~@Awa=xӤaJ%RaX()홴;e+-:'ٻ^jO^N_g6^+Rn9*_bKD]q@`y̭rށ5sgRrx0CR4”גtdGp;+ |Cp ߼$AtjE[-v]~Bocl>-80u3i@8=Uyq}y7jڿ"yRr8d*ChN'6;q/ьbF?sj٫TueDw+^Y|u݉7f ncD)qD Jke#Bn"(=Լ .Jg_\.b$B@>OjO]UYU;Ho`Be%*M.D{R퍹ƍ Yč(ec'a#I f2ܠ{ydȕ8FG<p b%E NYW*h頴w1lę]EW䋠 Hʮc{"yAEDf\>K9w[cș*I]Q^nyh$[oJJNã [gF7ݡP-m+j7_ MOÙ+mzz9# joACOxf4?TuI.{Vg՟'g?-Ogm펞F^Çqr.p2Y@" L)̩!B{,cW*Ng33'PFh.lt!;mcI3FH`{6YЗ1fJ(I(JIb Ilqݗ2-.P䷥Seݭk*!.u3<% Ѵ:F[F2œ|y46UeZcFa-I^rxT`td!߁$;8!z:`>䜾ϗ :oʫP*`MQd5-7pԬ^ i5,-OLXOVJ_E! Rg 6t3qb쪼u21l8lRtK UqhQ0t"@hk.nũSZ"y}M㜱|:Qם8AA3!HeRJ $,WwIX5HV&*yꛩjEk~/5X|C8Bó",9`^."cV y )?UZG9 `(%Gvuo0e|m Nǂ%r9Ị{w1\ƅ੫@F0QH%`5۹-r9 #j ׅLh,%3#klym1x\xsĸ(<prRpLh<[vҼnͥJV~?+euUvg%)gVa:_|9t2&h׀aJV IH 9Y$.`^Sjo\6IJXNbVJ(вnF77k#|E5w.RY޶ z8-<NW!]ňmrSvk<g4ݜz0osbKOFL0r@˺EXe(+[J)^5`8Cc&yIMhb$G^ս$)o&}36A2۩;ѼO9 CVD#7a=i)L 4ސ9ĐMs@&PuG=4/P-XS dUr"skOZ82Ŗ/V֍y(XbYF^q!I<{H%Y 1psmI?mV y5^LM]3<\\jBfVRS\:,/ni i#+hC`XiYq⸪BGxiB sj?f yg y{L%eÞg-J:D6VJsB4:vPj{8yiӺI?=JFĴM$N^{ѝz` ^&7Y=0/gHW,~>QL݇IL`&f0&U'81c#AMlvӆI<w*ɤ$BGش4ƦSf:13e?v5a80:9rg! 
AȰ"򦹝1 gftp,9@͊u#>(YϲOL &^NgXݫPSeu\T"ԪV'[+EYUh%5f+:[cFW2Fgl=sMZ) ]YlvCAۑ?u#b)ɍp] )Cqͪcmk4R\+pP<ϡrڙt~|vԩR=#/Ry4^G"+JP*m2cʵK(-J"k=N/P\SQB]ĔWG?B({**A(B-bl rg@RM߲qnì Rlk6ҌR]Ufe݂#vIO>',Yœ%*ŹO8GqtиKحh%[>kaRĬpA|p^{LS uOZ՚Bj:|Ds:sSH~TpdQ-"ԼȽ4\2C櫪]u[i+ ŘvVB@r,:gl Y_L'+-A98i4 2%ItS2/ngI\!qu)EչZo~̸7WT6jqE5̭U -5J\e'^[j89k!-7Uf9X(3%Ɏ5go>ңHל~췥1CCJ']}$icQSK{ V]_ވ]SsVo&j,:t\΄:HXkI%Eb@~lF?Av>^"w:P]'C)h`NxҩDL-fCn?h1$$RBH'zh I =K˂"Qx9(TdiW# ȘmFՁlOADC#ˀCl̠q* 4,="Bo.1q[XdA;7V[2X+K1L)T cA- Rr^4͖wr6'Idyg_.7^l47%\K(pQ.F.Ϧ΃ؑ_=7,atu|-.& 쎪T 37hݽPpimDsc@*9خUځK騟.7%[Xظ~][u Lt J!B0+~.mdaRR-Q\qG,WPy=u<.$*y$7BlmyrR8֢eM-TtUGl[cEAj 0jE,1d, 7T-Ks9'ZAR]OZjUJ\ʼn"NT.>ɇi8uv^+t>^*0h-$wmw|@gk/ y E~8m=ې2qOI ֌< !Jbg!,&U ݜ])5$aM89x>Hu)bB , JuLZP-Yi ZuJ˳[ֵ'VY6ko5pVOЛ%~.0fR+Cyx] XA%OЛo `@|%Z M Èֺ8xT֍U/طHPU<]Rb iƨK܅٨" e`) iCJj8%!< ӊk6In7 U*S9<OAVNHK9D/C 22B D2аTy a&Z`Ib^jL)̘0L`OSIF [P7 `k)/G4ӋrL/ʙ^4-ުJ%:лL*ZsX`Pb9CHGs$zˠPPe3ADG3ײ{uWxAzhGR̍y93ZE!IjI8v@Yh@By0hQE|.3Q '9!@ ;EG=ȥoShNژ"V.jrS% Z ~y䂲m !F` X,lHB.3ki2 Ty@RZK˙19¨Ԡ*!d@\q҃ %$wȁ߅e Sn5h)(sY6֦@[ d[`)/*2Az>FkGH*9rFi8gi VG( ɔNWVR5/6G_uwj tmgtt9p a/M5+[|~7һ47z O?_~o9Df,pu7흗/^Ns;;ĝ7n_d;x}@/}ח8=(6hlGl\^4Ҹ_x})VoD>_6OIo𩸒I!^ݵO% D! C,/oYǶO2oL)N\PmWjn=Q`u9=;:n|f?cQUV^;qkw|P^?]n=6w@zWw g0ݗ~O?A"u{1ڧrOO~qJ;Co;c-w/.6Go·oB,ԇR{~{nܗ̓D_wTg C ZߠטV܉YC}U<8`14l>aϞ,Nsr]dQʉ'RvQ?֨"nA:aB: !E)>|6/Kn4ma_zac'r[ˀ(\>k6~&_JԽ(ry?[vc-8.Kݸۍ߅"?_6Pcc􂀰R]"o5_*lVZ0"bۼU\eXĈSX$fl1YjFqvz$[f_2u.#$Ŏ1 \t* X,=85|ࣲkU3^Kc̒2;R[6WGn'IQgw u\ k}">.flfi+u+uJYZ(۷o7VƊȰx}z^N δC6i, d1et%u,$2{ы4؈*8L+)"nXѥ_C (Ol@U hvbCV PZ˯CpZ E!#. 
af-ξU"T>kxDR$`j,%L%84AlW3" <_1SK=׺裗E $+Lԁ` cT9h&"Wy5op~ +cα9a)׵)̣vCr=(2ZuZ\CZ뮊,Qgge\{ʑ4W}ѐK ؠ;/t·Fkı[4oَeIndݝdY֩|"Y4M151b(n3r4ĆJ`S{bB*o2Q(sY*R|YhqD|ua@-&:nƔڍ|馛vYF+U"{clB`5Yȱa(kgi21Y0c Zp;JRG> >\do% 5.\KVirE0{ +aQr-MemPvGgS"ms}rroKFUy\U-hVmxi>U{[htkYe]3o-'Q+jimCy/-9v"olJ&]?1u9en]8t_I!Z:2$Wb CH#hlk9oE[>P Ayks6bt zQiZx^tsst朷+e9j9ٴc>m:16'61^h31Y-UNjg:gORtN{kqT3o-)4*"97 DO_#h£p^u,^J:En1'"<)_D~O>]l*GF5a[Zi?{?|-zk˵ β/ht)LhHU!^SA"Dd DPhg g?\⫊g+V8:Nr,vѴuxf$tIB)QCd SɰA7=[[fC`W&!}&MD&f 8ԒRZE8Q.)rQ>GTɈ@uO}~,~*TSgu3iPfDŲɽ~=2,sdnt뚝  MNc=%.j/.IcZ=Z%Vn#%V,뺘^e-;xuĔֈ)|*\*lZNgLHemǶ5Ŧ,jbRNgojʢ,jʢ,jbu) 2s":Bmu|<=e[\АϦѪ"n E($ ,`Ǝ_{)cj/.)t>ۮߋ.-v(٥Q'=d:-9g$L*\lU m4MEϓ%Ѳ%.AP䶂m n n߼>p[m I2sO ˔fEzQ  B4rp;yTV;,ڋK@F9&neז:jKMjˣ2B%H A"0e+ѠHPbYK6Ic'c0#.R?blTdOdixh+VddڿJ!1*o7ηwML#LE[͞8936D. 1Rä)b3A>=[[qzdv<ָJڣӓ8% fD | @"%d2bBH<@2?%;+"_R$؎QkM%JD1Vd[ɶV]N_','gt) [F 2$40h I;SOhpw?GSS. $2cj(I9X](#e)->ƸL E2Pe}bKae@*UZM9Yf,Yhl>+WEJMhnB(זɤ@ ߄؃;EދOj6fzyv_5Mq+g0ܤv'qv̮E)AlpřBڨV,ː"9ݞd߲X21acSs5O95Qs5Qs5ƴE:aІHp. 00idMD\v"2IŴcQtP}@#6;,֟" "JG`Q%a& aRR*D/ r4n;i+8*zÏ*UT9hu4iPf"thj./ߺ3)C)[ը*q-dhڃM Zs-Ix E5 ݟ[DȽ} ,~ ^uq+`/i2z9xΣ;ˉ,[sՓ53r4™a ~U[J.WDw{Oʕ/뱡;{?|gp8<;H]0b0| \_hքRA[~^)M.&'4oMַB-inq 1soZCP{RS@6dxO3òy鲦jڮjڮV(30Q(&jgx IԍIWTQDL A>bڏ Q>M냀{s;XuX XBKNN?) K&q [|%ˌ^H;b!Ld1A`u")Z&|e〬uD"nE}BP"nE܊qWx:ָ;Yբ4b_^VلF^ظԄ{4;7HF؋e)s>6A GʪGp2:\b=zVI!=! ߾k{Χ$J2 9P$g #HRE2Z ]1KT(3kBۛȶk9 }tW;2#(y6>q0_%_:?_gx _~(r_kF_G~z;|`~o~e=a~yv~t?<눏ƞsk+/0uP D_/pOGiM |7YG +o(StL_GTz``y;`):Q7giLbTT*Fk A&ْTQP kcd2x .j!J@F۱QZ{_b{x]fܣ{XtR.͆ LVSx-)( cUYh1*BY8Cnfp/~YRnd'^tr"a}4&ZLl$7Fv w M*PtH] ,O"ir }>i,<->i f@4)PB^^Egֱ؀)s:h0!/ Ó(36iwi09:v6U1@P:*<;A" Y8|i@/qA32]|{W+mk r`Vk h cjFui-?bH>{DUHC2#W(E42_p1*/ : ֣{ftᮃu ϗ F Ԧ(-FxbLC9H (sKa[w=r SmY$C`;Az/˗fdg<_<4fcfW+xzجaթ"iޜN,޼o6yO?D>PÚOpB, 7?͆Lx~| N%| ? ׇק 7vmT>k_W7wkahʪϿ}a#њ,'W7v=>;<  [93ZRӓUvXx72Yyzam l.,HXZ a(:Gڰ4㭑p"T:2.:^Z.݆6`FRoe)-Eb4. 
iZB*+pi?1YKh1/ηyBFgl_է bNdf;;HaupQˆ$ S 1Hv ~8=)W=+HhjƺnkLmΚ*Xu9+ Fcr)zptb8+%]PURJX:jeJOv6 ADAz>*9!*g$F0c2Xz؃dGgJqT"^|2HERE:Fc2(.Y.,I.R3G" ӳv'xC >P+U5 ^L/y#tZIsBXS{6$s{F6 %sMb;9y;,aa-@jU? x~>T/]*W7?O_op?-YBo׵PXJl|sbgcy?7vB;Ym6Qv|3uUa‹ Joޖ6Olc{k yp;9?r ,ԟ.Nm=0.o8X¼9S;`O գF KE$(.n{)֟VXJq$X9I磷UEO&ZE WTd#%itu/ZTݳgs&h`eu>D_&Ek&BѹmG2&h~$ #R/0Fы8!?R.d%DXm.QnePNڞ|bBew᧲jwW~gd{P?'T>(#w &8IRlQ$sª3 _N-Wlʮj˯O>$XI 2z5TM*3֬$KTN;i5ٖ5ɑIj7~R3yzdBo͖Ms[g|wZv ZlzcD>/'B "ھw6%V[j1:a %I  T&zHV:)%HQͦZmVc "RLE$Z>J܀] B ]/}﹋b86dn.o]Ns S$ȣj0ɮX{S^GF zktE%zg3y-A^2s=_gKmLhF=O;xWKU FƑ5)_U)jE/:A%\UsR.t6ULVA4{c  ERMMU*| }﫰"Vm$MIq :B/9^ a9ڷEsڼ}.a i@)A҇uva5`l4}!QmУ/{㹍w3iy` <}%=g,ٻ6 Uۀ 23ۮϪv^E5YtX2LAz%XC{nI=l36({P2wZ@dx^8xL%lQ?0`sE8Y,Gʽ7WS]U-T| RQ[w-~פۻyMH;8InQnCNqܜ#4P'.{ޛ߀$x"(av (3WXBnrq=ˡwAepLB #'>˖Ԗ7Fb*Wk *G~*,^I~xq,AsJm[K,$sð`kiVpWҫ bHM$u6(K픑N$牄Qa֠ /گB1 $@Վ3 %{̉Y~-x}pb(H=͜yTok`',HhRY'S`/8v#Eۧn'LƵΙ/gj<+\t2*Ts{)Lu*1 k̂r]46}f5Pqt036ˎ vɁ6M ]vt'VމKE &k U3 !*L0?N#ǯ a">v7VXbc'e2M]uB=ۉPP[aa3"S:$5UxQ,e\ מzHZWXڞv![p(3nZS %:s02z|ShA;笫ul?ooxKww[@ Օ+dacכ)dJ<@ ѓ@ ,Z]OXJUcAdP>9(K<[wAWb z'ܘQxÜGl5%1l(D j5#eɩhLF.ս**i*igVV^$)+e f>Mغ"煃x\ڃi2Za}&rhfsxسZn\*Gl^aO|=ehrpLlyXlgwO6QJo8J{%\'6\Ks;A8kexwB"7ҮlKȨR;Z\5A2锲k5$S܏ͭjnq `݋}ul&^I% FպHbViүh䎒 DKO]w>|EYlv A$V=|3| ɶ:g ϝO>$0W鿗Onn._}  Gp>ߜ|$Onɗx3ޖt'W^#iŋ ,9G<Ļ{8>}PS<6U .:CEH^Ujc0is\߾JܯKZ(ۯ"ȧ[_+O"|_;ǍLL|ڻ&͑T}ϟ:ש{thi=vR&[\\ru(7?˓zu;byƧߜbKȏُ͙+0NQ396/.#)l9Bo͢*KG.OYg9o][o7+݃m"Y4A,Y$V,[4$A!umV_O*XOkvkwٙOiPYK\pJƲ"20 )*obpԐHyN$:s"C M|?xTF05LHi-MyبoI5]7؝+H&c>o[.@yMĄ3=dy8î=[Vd̖& ;"ypy>3D@}]!7)\!x?t{Bi?Y'㲽-u'&$k#"e?:;עWK_I~['q~I}>^T!p,W%aN)eE<)LrVJBŏ}\%eG\mzxsO\~ -hauz tnqZY,XlYYB`S*ڪn2FVyO,JĄ@ 8Jbz[&$ J$ iQi"\8+9Qh/1Dttnhwk_wlɴf\ aEPYٲ\P|Asnߡcvz ͫtݕhIQ XT(Qx2bUMg>C#*⑕HIQ@ic 驉4(h c+ÔvS󮓐( *PU۽рP*~@)]nzXvHڝ#qA9>rHk`ȼæcQc[K'?;ׄ# -(9ʾY|6| @zbXLjiZ L:]9+:_OgSPr+qô@۩~ūv+5ĸ=ΏV#>4mX)K3QۋOoOn\Lcyf> Ҟ?-լtn-3@J>TO=a3W@}aA(5OhjQ׺F`0&.=@PK[X}Ht@@]P/Kcdzdq؅ͷX"4 B>)WV*T* *]UVN8FfSsxmjvXZhGL&"|(49S쯩o$$y#/^B{JBjoRh-HIZyy`UWfdkRUL:0fZ )L!aZ]#iՙ6&j&V  \4B@! 
u}mQp !/-rPPAB]s"f0!=.Iv/Rɋiʜ-615ǍeQq9XP=#Uq_Qۊ*emzIfŀ6dԶfܜV 0#m_O5n0m Tz~UVJPt/984[N]{u{Ld;k71MWٍY+"$ r'IsRS⃖Iot))*彥1aE)*/ 0U@+2㲘tFxoƪ#=32+V gZDmkU d/eolEcv~fl"C%lukD^Eӥkږ4׌C`  %9Ku 4/rEmK]qĩ5jO&h[S=8տ9 ˬuxAШD4͏W8S+E'H$iDEcV( GP4Tf3f|?VSj4umkC܄=rYEEceV2w`e6zqCИEtY(N9wbo<@'|s K[)A VY^Q[.%ܩVΘV1˪8Y(4-# m%Gf*1+(VUZKL39ϭ͍JJea9J3OKC/,)+UJ9K%Z ʨ$Q`nd"iE-1 p}/maJлdA2;`dY)t4jFT]\{X4" gk{wM1W|Z]Nbŕj/dEï>:gaE85ޣjqfgbDŽLzcAC2AX'0R7 ^g?ݟ_2AݾfVXn쯿ˣ;ڧ">OE/Gh*|;77 WfisZ%lEV+xLFի]k?oY05-hעdZCKf9{TSWr?-WAOg.#?`ycdQ]X)d-;FbN8pᄩ, CTY)mQC6a3w配PmqrHPV݉{D8>}\Xo%^]%بɵ/8in1pyݡcG&VsI8)1 n0|fo do6˾?{r.`m۶״}mzWЗZ}K5g3ǂi{䟙^t^0ۖuz=6jv]6{-[-h-&F/veD> fW~ȩ}y-Ri&- &Rf+%)Ypm$0\yRjHdTRa:3H Ȍ}u\KTc>_R)M[ %)|a_^1؜I{vbUQgvx*Shs^+nPN|btΖKp,DKG- G ^Zxw8vc|KiE-|MQpB"Th(1_$ 4xǩD΃* 𬬀.;"]9:L_6>GS4Wlc :$hbcEb" {BgҘLRHa;^hv FS2xlM @\KVII^&4zL*yUHN,_O,uY_{QUQ-6qP^$k1kKA#~ FH|0ڱ*;xJ#44UN"x YQ90Oҽtp 87,W%Hkm{C?d4(n&hŽjeьh}ɑ,dh#fz#ϋO#X I=湭ԙ#Y穄UkZ!P*)<O 3ATxFr"62\UYM{K‰s{ZU5A&_|vW@x#3l8@&#$DIRmc?يpzUyG009R;.p :sAOZ:;V KM$})@*D{k+HAGܨ#n B< }P)-9F->d)¦1_M_NaVw'_0P{>ONKVvZBݔ5Sltً2 4`h;YoZqh,*RpO Kᣇ=nIeLN6W%olʺQצ&B:8~\Cw#q@#IhA-:"Ƶm]+f.4oFR]5i:uHf1wsđ6S<ufYސv¼ⷙ:FtӖmW-[Fmz%t?a`#dNZy뢸@xCbXF,+$x H.HH~=tfqDR&#"(H,vrʭ; Z1N)2",1a@MI_%GEJ8)4{$;2#>|]I<Ẻװ!aSmW{½> Ն:d(ܖCoGԳ .84J [OQW a̫{j+&}]!!_KqԲw^4pXA㇑2B.x@"钮:|[H |9B/0x Rv.` +1V!V2카_tGqXa%scg07s]K٣,7?=ćWYmDJt6sJ@Z7B$hc+ˤdxVT)IK5˨fN=ؕϔmJf_]_߾^w;g no^gggMVEAX&TG~6{Y7-=K'59y"pz5w&Hm0QBQCbiņbUK !1H@"3_ DZasAJww{!m'ĩwrI+6SNfPv|cm 򠔄Tsid-ᓈo?ل6E$ Ѽ;@@=3V(͓J; ]~w_ْpg3ȷу1qb)xz+s)0G& R)܉${gF?Po ):d: jfl%H$P#7O0ihŵMkIJsix֮`X[+O ,Z>>8M<989<)M'1\-Q |W:8:`HVh7T8.l8X,Ô4",!4G(ZgR ,V0+4@ Qo}jl<9m|U,Z =r1 |?]4үsNs:*"Pmۉa/s2}i[u0;akj ϧIV}.wivy`LzaǦBzUcCU帤v䮅64( / q{ AJ(9d I녢6]?s4!bdGOBDR2y(ឺ#/ ܇VWn/9j Brj}jP@rIX,A*JIHB EY*pd}r-.IiPpj>3f2pcb}f(˾߁\a!;2Y.F+7gcތ`Iu6.Ɏ][ZA1;_]\Ni)y53Nd?I % .'3@QF$4+1񣱓|YU5j#r[X0q {J[G\`šCFA@яb(1$/]]%~y+L)EDvZ\օA(n3`R_k5,s5moxd⅙6CsdC4΢x~hd^9"{^|WώY~kK^1A ,{ L.N4af%G$Y$PgY,1^CI/Nֶ<'gO92']@ Q)_Mg-- /N 43d:k+5+'FnJ-`F`>*Тooͪ T*FI(p2DY=aXs(4gI" 
qčJScKIacf|oe/ȝtqopI>C+4[q>,$}+}G@X}wo؈PSl&2N̰eɥBH\THF`G0Na3S2$Y`.LA2Pg*F*dxޗ'X{j2 =]M;fʳG,bjތSEg֫R6|e~Y-(!#'JGt9"f@dg@dg@dg@T.4YPkОtzK2̀MG`{M3l*ԑ$ԅ&Li%[M6DhƈDg R@4D(FJyUG̥W"xF0h/88_O7Ik ʗ7A;ʬm2[`Vw{F-yb!u̍ziz |dǩZƚ 3*'RA(J ǹ<_q '#t21_Fw9i j#n_Fә(.o_ ԿrkqhqӒ6F}r|cLU~[lܙw% ?N_L8m<`?}g4p1璘F̧ɷrl|͋}?xwNudg/4Ө? O6V~~Ww_xۋ?hK}^?N}c{3߽|ф>9T;޻~f2gV}]8M?*};~gjw/)=}?ˏgfMƷꪌ7c?˭gmύ`Ǝ,LBEaqOTaQYCSGw?q󐪙yVw>(\]7y>k<9.3#=] s6+ϝ x;7gƓ۹t<ăYRD9(ɲiKml wscb$>w`WQ᲋ݱ{}ҙ zR(jmUC^^dȏ"789pQn8\pٮp?58|8˗E?F]Y4]&~>^@f}0q/<):QC]ϫ>=J b.9%wx*}7tSEE]kAVE҃vpyQ_ϯFg4oY.`63 E\F5ѷ!k!q̴ccڵl]eGnוg2+5sMg#s( p'ٰh\77hlMd㇒7~` %NsF)$2E,XrhHI4M(Hm`8&^ sp^GT9%}u4xqsҐ8HCא5!} 84!} 點 J%4$ưbPLSDbq>Uu-"Ff!RY,Dj"S5YəHuUˑ(Ҳ4$2"J) -V)TG' " Bfx jSͩF{YAʌ LAcLfijfP {?Z 4 *f$FIH)v P"IC&4%VŏZ3y֨u QuoPCא5!} 4L_ kH_Cb g#062بDVJi,GQqP?s팪*tJBHqN6"nPKI]l!J+7JƪQ5JZF IͪBĔ%Gϸ=Gv|p'(!;@|&?fNyq]B|kϹњ5$"Ӊܳ ^]ɷ b;o(ԋ:=bL/?KAe*ُptpώ^e;/8_|{kXt!Ph:א 3{,ab |?~upzt^<}\3ih($]8 x&QM 17-Um ENMm|5H.?doR%O~~?>Sa.o?uѳk^ᣮ:Awa٤Ct'Rh~aEF&$!Q2841ۜ-rd8u~k4Y5yw䕹8 /"9np~o)n@"i g&28Op1\_ix1#]Oc )\LCJK*J7PkҌӔ(RgyKGV80!A(Tkji1Kϫr hrFL/ހLqas3BK fsS*wfL '~3ωdëJd43i&8)iX`~T];@,ERaB bMvT9eu4y ͊R%7h\]Pe D9\\x˝Θ煎煎煎UĞk jA2Jɪh+%3*}.@,=J^T*QS1{L' p[> lO"v ;Ƴ/ZG:Jc3;vt~EP#nEV]2;g R^b,r-J#ee1å fN._Gxj-  靗2(M_{Y#,;I9~MwOGӀc;>B  JvN*C{>3L9V& !ՋE+)ˣx8=cV*j-aK8aQtgiN G&$_E[Μ.Y(+"lW|[#2}7|f^oetov?#6&r\&!ֳdWgqi8Iu,:oUZ/b(٘X y՗j;N!՘#ZUu^t6Q֒o,iEbcj#HeGsxiVc:9B9*ZVWPjM+P|7}Xpr\I,䒎JĖ0 48ši kRûk+@TiCkDMeb65" Z5U3{=Yi=*R8܄zApVJai͡)I8OcI#bB`iP$R)'rAbCG1^MX] skfn.|9t'(@\+Y<"Ȗ'|rJS痬r4/C6{^^L{<&P݋2YRP!*LT;0^j{88rYs| `aCT\,l w0,D8,,ڔp3gNA Qh ](_n~:yV~)w\[S hAkFNYP7<U7ѩ Ǒ$qBGV9UCspfgʵrDKkUWV6ooFDj\FQǛs3ц"ʎVA6cE6 `ۘ1t[|"FRFZPd[4,ŠB[#7([X?B'e A|J9ok&ҽ*ȹ[?^qH݂;-\a>6_*VF LgxˍA2w&GۗvqZ۫1[suRX B~ B&][7+Xũ@ M@@g]D, "rQrRZkLtDOr&w/o5i\|+c8zxpD$nKp: BǸ5v EVO~"m B\AȊFhVdʲ1Bq'[n`Lz;V 47R6qe4EhR?4,Sx')Y.^DRAl#FYׇJ TO $bxg60PsQ`[nA8Z& *7N4 Ʋ-ni(\dI1ifx->!-6ԈA$cS5;DNUDK.&Nj5>mOS2g\⩝o+͓kc-⪉ @Y&\ "wEl'0BT|06ݩ'`' -pX/uR,xi3 4f_?[ 
n8+ĕ$ա-.A\Z1E7_4NӼ?cN5VRP2|/R:gZݦQM+=X{jox3rޣ7^@޴^4_cFM`z5+h֗z݈O5TK6Tn$CXR+odt~ GLr.KcɥW%DTh汔A p:*Kf" % 622_s1&)`ޞ Mf=>tvPcWp: ܻ].S}WZn(׌) qpVU 0S{g8Qße0eJOi`(5f ?B/4Wc/q@b$wG*Xd%, AǵVSKckstqF/Pצ'L?QO'M2jdD7C29&͟)3EuD4#p~oRM &P$ޔ)\i5<f2Y (q M56<#B(mGfslGRm9.Nn j51^_G/8~H9 S>jbKq@Ջ%49|bw=/+kv/KPW=WhC/+3E=oC&MIjQ;RK[ ݌&ߠEx%; hfb (5/FK YֽYX+n;Sha<\Y5OnG=v,;q*tΎۏ8c* xJ /JRRG{7O*o.,u's$3/a'*N͐5Z"%j¡r)TY[V?. aCX,y `I_{+'~̡z1¸Xa`u=3}{Ax[EZn*vQ$\͇2]= w# jJHmGӛ%KI~]^­MO=W:nW@;! Ͳ+Iw+A=GW^Q%X jUyDշs5ZL ԋ$]X]B7ziE9@ޢ@LX`8?q#Amz=v~G}VFO Y ^ߔ:I;sˍs]y:8ʿS},ߎnr=\I<܍GF_6/!aoHH0T.^JFo7gTދAjQ=K'tN|%B ]䤭ց fs.8_*d/^8{]ѷeRbAZPW8Ao3 x1R'#s!O䤟 |ʵI5Y[&m$Q7@O)Fh_yJ'&MeQ nidvC$6PU= sI͈Ul?O;#UI׹MnʩP;9o/ιҤ2zUz4M]Q,eRor^Ia憔&XmMm,Y;9zV! 1>)bKef0oB;fjSfZA$>DBX3GԘ%;?hsI\2rY"I=*fGm'ߛŒqͷ"JhFr-5帳Ԇa[Vo2c@^T5-^r̚H^߷hz+ ݄uְ6폛hVʔ$ u:Gr2|9/S R˄T[]p*RyJ5ݻ5J4Mhe%e2Gq.IGMN՟ӨcpIJ`׸΢m7'Haj: O"$t%׀>P .hk%.;9eXEKN0bxץk7Lpr,gh ! k0*4C,}O"5AHM9ԙ],!.HyaI|..vť-fK͹u!/SHjwI{ ҕ֬>l; 2 5R *!'QVȷ`3.МBQ1ª3rު3zkEՠjG1/V>3`jc/~nɘoeqh3NQF`VR N? Ņ4]R8MrS?/򶅝YȓN:P!37[(^6$kiwg_B@ t#YaTs[l| Y{Iȕ$v~1wU_1 i+b7yq|֙~D( ygۻZN?AC %]*ܥ<>uhe`HLb/CP.|02` fwz.8 ]\׿8(M}]%$oM$}\N\nђtr<{ZRz\3+|ڰE# ѥJ@I*1pc<ӠebľD(Μ {qvB݋.&nZK֙|dLcIơa#*R1L_oԪ9qkv<ގnfwY~q." 
P1VhU8-,ZՋvz^,"eY-^H*E?Δw-҉_37r\ɤh^+->q@*[|0ۏ/,?QL'wxiLZD \ H (p'fI$Dޖ%@6Zw qm`<(q5 &w/o5i) af_WW_AOo~=t&Wŏ[á(տ$[Ԧ`D$-LUt~'6^/+$,Ǵweh<lPA2FqƠZ2rFb 9ck&K&!.`Kd|pLj(rFi[VFi:پN{ e=ueLOPƇ("X 9|ҙ]u'TfiU5*߃hS_qkň#Myaxt\8)٫?۽N`C~pf' ~{f4_GOO~xxzh&/~G?j}-p=G AkzXkrכ u 8JqnŎZ);.6 hseaEKN _k欽?B=YYx.夘[de$dXÚiXC)}vodš8}F}M[_7LGӛ2fzU]?ڻw H}pw¶{v2 P6Lqd̍Nnn~spNn N }sxN}mp=)-޾cض@JX}c˄գSjTӡqzzxSZq͌d=9SP+,}t.qR{Jvolʅly):MƐ(>IQSݍnl*YlmR %Eor fqlp7֝ic/F% (j=m0J $Nm5xΚG~#񞳘Jxz)_aP,HGݳ{ ^(CR;z @24 EʍՔ+W݆.ZdN {|oTSU+w$_~ q;؃7G3T!eVQ1ﬢNo%$@:ncۏJ>"Nj]Kw=}%\Oc ca'?Gݍw08[|ڬ>(R8|e=1_OoS oh>Ѻ;NjԻ5~NuGǛAdMFvP<=A-SvKmt*xx:dZe^ITvq$}b!Akv[id׸'Dn X0[Lmlpтee+.ܡ>ݪ^#p*wSe\ xKq !Yϩ><׀sw%Nȯ<]Q;9#v  YRxj?]66[7}f^Q?|:Tߓwzj:ʜĝ~C*Ó8Rɚ~at7Z>vCw&~JVB^z}j?i(:p^xvm|,m3bԖ{>5E16F0/+BPƔ 1Ku))FǼB)į/W[c>ߖi;޿ɭvW?U^גDUF?i|;G:sv ?&*A!no;B_׎ЯkG׋#{$/ 2AR9%k(\\(x!kC4NBƐI[*V}˷&Sl-mU }˹}[nU}h6/eeh1glR5X'7/ۼR..cxRs6\ Q E;n-ɝ3BB~Cq zWa`IHm*dzBʹD)T!,pN+$\{2&:J+u Y^h$"Ȃ/ IاJ݂\r6YJȨP*$+}r {ML;Kj NjEAjAo;^/u 9|}=AuPۯeӣ"n{d)_~}RuFjo!b6o݇Ju\tgLnsWWCZN:TNL4޼n㗪Kɻ>cՉr! J=Zlf<5c"Rj7Z&wC֥_~^ݐy7 zOE& j|DU %I?,B6i|ɴ"]GOUS쫭ϻu?8u/y|ӷg/~W/L;%lɳ^虾wK%|l]F>yީ3BX-vw|l\ϒƳW VPB5-b F~%p:ZDجV"[g"S&z@*͎! "X/ӯS!M29ԁY9ztl3wN ϳnsfwdo`g3l)e1lB`u7Z{EĠ-z..WՅi ; 294'VdO cLE%ƅ:AZ%MaSCgQڶ9̚r. 
B*)6^M>D7TMj UxC= -j oEl=yr-UHh٧]`S YcԮ!)@lrƖ ]+h]OyO;zl ʉ (dd=f*Jbx޸՝OyԚ- ldwA%`GKn!``DžxõڕIz[xVOE|;0Jm֭3U }5I;(ǼtFub@^$ErzM-ϵ?:cU6l1?ҟtӰ x-G?=&!9ZaK aaG1vnTd*hwx9yA%N{'TWf߹6btzk{*bgEg} \W:uL\ᑑ0 *V'_L ٫45otR#3~{۝,'j]s+APOȞ}.uY0.A0!e(Y+<@bR43B6@6ϛ(RljwO}Ҩoؤ5Hں;@N:/5""jI10Qv xY j+>!\N[b` "Z^1*c Bf3 E+*<;)DKs`:UhWFņ[W^rsB`Qʞ:~fPރ%!0esĄWw~s\$ oW\_XfVG98j9~/W1cSvpa@o{(ࠨ}JA%bkV:# 6`&ѻSM`1ɶ]9aw2y8LV2EG29m,%uaC_Kb(B!x-jrJ@AA ٙ*kYْ,[NJ Pc%j"DV'dLhb &+~qȬ![ kMo!| &(0B)%cm Ahd46((]R5K*01VԼ(## zշjx{qJ'nFkKPqAcúZΤ$hT݊ hޅŒ( (IWdNhIPXČMwy6G.O=`Nmc.3순K(!̣A3h\ntd ) -_k(fZ$bdzeSFl2I)6}ud K%5BNq7Npm9FiPHId֖jnƫ7-+Ek({G``yVK]M>Y *B\zMcW&ϱ-"$k}{` ~vZ׳#mb;c;M+ӭ '+܃s;^*̳âV?3R1sVhم09Ek=qg΢xj:}ŅT'nC1:CGqM]]ScOv|,Sؑp,]9At%!XS@D#xv7 >Sc_!hO'VDϾeQ^4Rx|P#tֳX X"2^+sVV>=Q)l@`h B k:2`ðcw>2J*|_$"HEPK H;V $)V_EiZ5Qӭ6h16֨]6P- WUn 8SSjXnEV4 ]'i-59 @Jm$ǴШݷLh@g*caXk6ivl~2;]]a'9&4|*p~k\]Fc"w/x\׭ ֥*5;on}Yu}خxXCc[:r$ RLK3U$6i^7&V$m]nW,Wš8[^7Zzs-kwgv8͊/5.z׸x +/֍zHhG~ӢY -uet -Z_~)qrQV{ p˱0b9&Dl*5F(8r쎳W/%d-rT|^ˋBڲ9K) ͆\QG)1lD-}x3)=W_~W?d;^]VףP z~V3Ϧèv嘍s6\%{nx>yr9ͻ|UsL ++gEƊ=9MGԸ&}<·$u'"+Hypm{V*SͨǑIB vY#$ػ5k(u]Sv{1}IINSyHjhT2%K+SV25u%Lܽ? 3 t ^ $3&GkԇҶj;(]:rQ ֹ۝\Qf4ʸG*P˹ tR%9za%Z\ejRpw0;0n!ZxKVajuTفmAVz:xouO/Pe 3Upi^1eiw86 ٰY]$(%8V7Vf5i<$M:;];'8K!tտw`bMiT8jt+G>0Ƥh#\JW]o^dcg&p]6_[EpVI>Y[vQ>tsuݳlo}aeox3Քs#U *r-*`U1U)4 EӢ2ڿBȍB9QQ&-W)pk)ER|% bMܖc]lİiECM1,2),Nr i :lttN5*quK`u1ɮ1`*﷡nB߃j%}7v1b@_=G|oȭw?t/Ww>sPu%nrSJH ^czwu_݅E|}CMfwi뜖u ~x'&$+!x_jlڄZĄ{P$>׹lf'|$<5Ӝi>Q^j? 
͊]+MnOYh{gD\@Mb" j'RNBx׏Ts@TOzK StÞ8"eT >yT0ч=~J#6~wn8 _r}>ǹȦܛMǘS&>=i#t6oziŮefEٍKq4KYk{uI*}R2rsNXފ]Kͥg.J)D*ů y"H-;-JvkA4v;׆1D*en'ڐW.''I.@c58wx @bޮ|kLd+"$=G*_Bo?ȫJLVXJ^dz`og#`WJ QS(8PxTK}Pd]IHJ :2^-r~Yq';mw dZȯrFP*y5(/q}~Zo'@Q(F+)[Hfi! TZS\xF,XN`OۘO{ʺs IH("1 H{K`ĜAR c H)%۝cj'8— Є{Jb6V 2PT5 $3A!M*&}|Nkp `D)lr!`C(DF BM@QJ;i h!Zixb. 450F*ϒi?oRv3D5=1JrI.EC#?@IP??.Uy`Z⇸Q_y|O"@Xo'b;Oźnp*B?"?=?<{h+y>YVf\c q\\sK8{)JQnWWD!Pi9uZ5uR3l]f /:f=WgJi}Bu-v!v3l;42T~!RDeG;S ׮& d?=)$ _YMVɺ/*m}c7Z "D:٤y!1F\+wLx:|+JSdAԟn6}o=>Ϟg5)<~hIx nl~!U);~iat^$v 597 ֚< -O>ϱ缡ɧe~i1q\8T8x<5 kpŘYG!n: H>ka yDdfH Ɯ8#$8`jC ! *HC H' !(Luo& n,F2/pEdqŬ20-`FenHbn)и0( V@4qk'b \؁˩ Հ <5ah)9gr1R`47>pO AHcG $(4*MIlb<sB #xr@ ?1 Z`k,J} h=7"RB1C*rxn![uş~m#4RZi 3q"]٤fJ+zF58JӺ6g.fZ#$ LiE$u%pвvs]%b+!ZP %."\ heλC"Б*~Y800àfCi"EIjśy}8LwYb%;{(KΞL2GJɮ6Pb\IQn7[,n0i? h~do'OyzT!R B^̇8YF ))¤&0͑/p7D렮~3{~χ31K4?rzQhޏ֥Ћqz\KCL7ޗ3/5~XVTIݤn`7wc9G29G1f{A ^34KaaT8PBk3Xb i@O 樱dC~{)1zF ƐAyܔRc#ňfG좾JN ' J˥G"͚2  6a{#N g?95-1'M╸E6E%̡t"8]$xM`׀!]8)/Ásh&DVj0:ҏj{ׁˁi\[z@@i@?Ȯz`C073k1Y$&ܴ׊[ϞGCnq^R=EXtK:^D'[W5sT(@Ͳ\TzPgp%0ȖBvr0W.t_u@i7=TKmT:c+du!o8FVF2_+vVmH+F2n;Mbݚb#:MǨxеvk<\ֆr=Y(FlǛgMaNj\uhӣɍ5V =wvmb}{AT`uJաeRɮeˉ1yXbL%O ,9CH&L%I0G b=L Ld^!h'Z Tj.) ʫldDɶ7UqF&0Btp-vE!|<=o㾢.ma'D]7qjqT5uo k.úv*dH/jf,JDiqTͳSSbZ2NyTYW6]ԕ @u@Qzm~'J\B^{FWv9Ƴ;yr0T]fLL^Peť2~~S<5&yM^TxfS;?ͦuRsHz"Eη*)߭HQSI;Rl{3l's Kٷ'yV;hht('֨Δ5譀wFf}`!C.ywo;ZQwGsܟoˇ3 Tqtvv0Lm"B-PzPzcigp:0nAn[- yS/ǚy@  B駭 5{_hBM*x=qβǘ4hCüy3P,E㍣[cag;F=ŤF;" _;DqvmgG<ڻCCIAu%Ϝi{Oq6]]:S; -п&&*uM/O00Ru+\f` y~Hv:Y Kk,@bYǮhMS5Tyٷ\-ey!O3}7z_ѰQat|p_jI.R]'_5My#{X!Aa_#V,e4;ת8ڄ/e0=LĖIFRkrO7pNx= ^B+M K @%H:U-F0bL-:s>)U2q\@~ZX]_ѨGҠ̇( ÿ)&I&iInd񆸇? %YXBm$=?AYGq4h]sGrWPrI. 
\u%+K%/_|$O,@}b!b=nT Di|ޕߠn&oynɻ29aAHz'|?wNVF3Xű]ՈϓX &ջ tҸS H-$2F.oؼmX>ޮ^]%rS~]XJ{v˛k *)%%T:=UӺ\%B4~'bf$)a׫XKTn֌J gqxsX=K?Uve~Q+!cvHQ(Ila!1a d׷3{yu[Yu:7j"JV^{ Dj,gL~+\Ն?<I`EwnrV۹ߙozx'Ж84X՟1't>Z2gȠ)kz\uM{~XI,H8erY (O%zsl0̒mt*&} I@ "_`-$ªȋb-V9nRwHP:6lgkq=7W!ݞOdtG%Qh^uzjΞ{j({Gc*GnUwݶ8&[wɭBB9"@O uB h5Q()cZ tƌJ R9vqVIʔƔ@騜: %p[ ;P[ u@Œ襃m׻fb&S s!Q:l=wXhה9R#)r1 uBq[P%ײGs6Q+ >Ar:PiMꅫ4,JW Xg(Rw5dyGuS95:Xsn}@GHZpӺU38dBz Y *G aE`GR8A8Q#LdL-W)2m q>Nt1q}϶"G9itg | sF\:qH f=m uZ=U:xpxx1V`*Bkv'a\GYݎjﰙbU><ۙ+\;:uFgIfHua}ԣ0+w!M3ҙtDt|TB:_Žm)\J*:P(:`Z_.m WJ`qU`^ԅŠ1Z{Tta SBxGQ o.K_:5*_nuւ`93&)B|EwjZ^lj݊0 AWaAu_+^(~E.5M}ϯHڙKxJbN~n!H w â <_gE@/UxTULUUd",RYv|THF{쑙)Sqļ/IT30y΂ȩh" ʑk.FXXά`ˤ8 ncj$6:FFk ` 1Q.L gS;$LԔiVm֦NVa!K@6H}Y 躐f>v!",3#V(J0BRj_%DhrQQH`QH$ÏF_HN}rX]N\:"&Xك08h%QPU[AQ5¬CF uC!' H!<\ 3=7E l 0,JVjֺ*NV=m0eZ TTcDZ1$jn 9X34o5]chE^$hpn5Q4I[k8uW׫Cra8mzܢK6.nuPgHP \O /N(P꧉+ t^nPϲWpZa݀s"q&Z (݄nJ>dv'~Z32O_r˴|^]ZLN+q)[%D4A7mT˨LLk("c-E's ETs(dZyIk!׼6O"xGRJ8TD7h:>pԇ+Te#COSZ-ǣx S1:`% TmLE4Aa-0/=2d#j.^ yjKIѹ1QWϪ^ON"Abp쑍F0m:TGH-Z NIQ2mg>::o=3yI%~H~1L:]i1wa w-5_=^>$FVٻko&A.O~Oܸ.&?]}\` E"QRu؟n&~fn'+0f;$[ca6 ^OYG!kvKkM.`!%1ZM `'kҨfTzY -5d],2o2)O 䇴 4ռ;|)?8Sۻ0uk;~bÅ/+4EV#+F LQma^ugJ]I>z~pf"dh*Uy\@Gu;k+8/"+&!+$Q "eE> )b)U)\hr@ZR)Sg0uA+d$;8KT-Iư5T퉥w wOat$[PMuȍ\9%)RQ 7knh:!;O(bN`p+xCn5Hjs!2"jB75HӰ8f 8 (!+M \ _O< ]7*F[A|2?ou~e ˝%JU2p m~|@u.w`]8#l- ǝf9\T-9ՆE/39 SjvN|q;\9+LXܠƎG2^9nzCS156u9;6vQr18R"J21c)uS u>2)2)\(~.MԪe +d(]դRR>X6Kl/?6$kIoR!UHV[~aZ@Qtפu' StYC-|/~GD(+ G ?a\QR ?bl}YA˳r&)E"|QjUcR20 *+3ceG8=Ķzv1cB;;_-wsQCHYqPL 3#WS"U\9竩,nx0Z~;K9أUH ~ )x]mo#7+-dE)w 2݁&3x^IN2Y_%mKmbwq0_&OU YR0ה {bS;&jxl ; 65yR$TӁ4;Agr,mVn7>-B0I`A{ &1^>NZq:e&j]pWjM՛؀+z U"*Vj?x!aTʍ r+ 3PgB<|^Nt7N.UڌfهPLռSg4a乴.*쏽F22b5>iަTnGVgklؼ/dݓ8iL;^^p UqO.=,ҳKںt3[_m퍤5X\fUUPZr km2ΰZkFp-]xJ8v+ti'=D6mF۪s;œȇЭǍxdz?E?CC͂ 8 ɚ"Ί@f8f8XQ8,Q:Q|N%fPAp5bt:ir`=oSs77Rl|7>r-9킞YHKW: Ƕ5t/'B#қ 22RI?+"J+_Vo%!1DŽ!Ugm=aA2^deT:msdS2Q Xg1a*k*Ggn29 5RKgBYKWT^_tTc9$8Znz?[z;"__.W㫴yp5Pf+/vqrq^l>mRˡgyi![|=, !*3uO^=[4UVu0u3X() 0`dփo'&movoǣ3p #2ƙ9@}vs" 
Z0>ƠxR!2Dzao\KZue58aQZU+h=RLUUĒ8iEYaJcؙA.phrawTjd(N 5e:u J! ;S@VաAynFBx=5f@'@Gju!`?; +/)̠3HMA U`BƵ&6C:^;fCU:Ƞ+j9ALqKB!8yv 2|O*>ZRa%-?p D Krf+.2 y}ukU~v_V d&_TeXg<=i傟7|{?BP7X6C|f?~O'tLOhV/p}qqÛ#Λ3BNwԿ|$pm}A *ۿ}zg*9Fht$(?Z'C~<-ۃ#^ /艶x%/4EH@Ny vV)lqǠ\PY kC6{Y{& FD a~t6EoR$v&J>{sK-WBqk1 #Ì aaQH}/m]EF6u"EZg7m$*3OI“E~:˞ @kM8lz0mw@\LUf'/x(q8;haw()z/(6@ZİWw00 khibבmH,нK|P$' !Y$E7nd˪_-ue7( `2=uQ֞!xKbA W"w]_~cOcSp"rn/nԇ^_%iI~|?MKe U`p;  nȨV'\aW3<,, 1Lc;U*-)zCT$PdYFF/BX~U @ۋXD ryZ9Oɱ}՗k[Ak U1!(m@QZDZ'ܡbW&Hc}v = An :)xQ-Z7P%]$; w}S`>,.Q3P^\)M?d]?jt}7stcI3]}<qޛ϶){s6of7Kt]^f~ef<_EX~?5+o.އYGoÛdgq8Y{uo2Ήa%nɓ~XQuF4yHMDӻgc.0^,+Ǽ9@S3qaR;^{4{YwMQ%VzkVAjnc8ꄲÚ% —y>,ӓ48m%WL4pX`u a#8k0 \У*h 92"ޚ/'X!F cDC]2Pi4e1.1J+UA,|zՙ2\:uu $P/C#VvDƻkGCE{B6ر׍>$ wst&x<T2\bkͪ?a2BG=UW1Dwt>Mh1؋/9!RC_M-vdQu8Aj ɖ$\>uHNC⮕Yݤ.mȌ֛w7&LkNR+Ɲ5nG 647Gj5? ޏ䄒cxNAkyfm8,st?Gd`dorP|X,T|&|S<='*.\}E:og2fd59ƘPosGdCQR2xd%T!3ʐ y=dL]!8å fxU8gK|5$+":̈v*&J1H9r!Fn兺[J1I:?vM*ekRL$.^sM;en4InIsn$&c=X[7ɷȹ`[`@*gVv nNaJDC0PXfP+nrd,㦯0 n 4',yᷫ:}mk5l{"XM<s HjUnK~@SG_/psJs$25FC0uUHӾ2Іy;Lbn"?Ryf~jXo;ŭvk{gqIP=uW %1žd<2 [bf 0T@M,CQKKMbIV$:_3r-t BH+ơLRjMk _f)8=#G^J>3d%j 3XBhwJkj؀ N&A2d3qx(FwQ^3=+0;Ov~+6y!ՊIlږb#V:GΦܙSIG۴-7m\DdJ[S@ѭoؒ\hHی\hHz8ZʚdpTRʖbJȱ-S%F!cCBr[ahaKRBy7a[-񒰍Na8]Yzk=28T i(vزVMT;+|/Bo>eҽu_'rc8K2^LOoMpj&c(qc2ʴEGGg~YjZ,0n)0?74!0~ːpQC]93&)ͻPC gGW^RR98ꔒʎ)HyU6@J7ʋTŠ4Ţti ]REV]D.)\+VZç Q+ *7n)8"*6uoN:RVͱ-Rh ExSE, P?ޤ 46c}YֽM{ųW b: }㴨zE<2S:(Sz'᭎TH>x_zF\RI0]Jew|ӕխr {a68S\tv8b[R11;Ӡ &*`Lz)vg TGt%'MeO&8Q'A|9hLwJ^Q Fb#팙P92r%C㚘G"SO,>(QtaZ/:7&9Y`M=T[4+ӦԄJYRWy(A/YXk5ISB}rY}}Y,5vсe]QuC0ಈxї=Iв7spYTVZAQ0=|?>:hT莩AJh_`$ _'|z%/(4ť\Rx5ݸꚩ/0yE*%%vt(2C n MYI#І?0WOWr9?I*Tbz. 
>L&Җi!m /˧CɆ\ XYcv"|2Sw3Ϻ鯔j.II\9WxR64R#XM+()Dǿ8 xTn?}'r!Don_fJ)l`l&ӿU)O5i`3slxRRq* 4$\(bWӣ\[רԄ R6V3a 4 +(ɰ6В8 N\B"grPJ΍))8Ӱ؄UsXPӁDQv \4 .?˴pn`FB1VLo)bUw'a8|S'&0 T8UA#|VA} }VR]ö3RH:=0w}x 8M4IjpgEx0.|S{3bYXh |JySgqfw4}x<|Jǔ7yRRonOoOvS0X}wM?=M<(^q {?n1>;Kޢpqi/T.vQ|b&\a]:DR8Zofحky.a4>cP7O%lS,E_eЉ{=d_yQwŲEtη*yom)G?Lg(d)>qn6~8 J4O,!gHP,k"ް}|[l}~.1J B vOZI5;♊aESDZ1w}gJOuzVջ.{rFlfkPz~QbD.{۬Hn38A85=oX&lO|>׋kn/Qvn+J{lal%p)[ y oljó!¡S pY؛ŀ z}""1nQ7zGT (UVmW] w#.iQ\&l;w-BMv`mG5ĄdbBp}NTHN<Gq(6Dy%D" G(Lq.mDr4JJmv,w\D}\[kmZ8R3Bͥq%) Ģ8M-b;߾`bQ$&IjZu_ޮ˄)c]@!I W֧B?jjt-2"Fc\M 4cV-deۗ*ńER5M+^**t%~5#,K Y d>"+VzX2~:tleq&ۄݬc(~*/[V2CEiQpfѵs3՝6JKul*K ULK_ՂQ'S-u\++>ݴ*KILZ.Mc$H5E}h*ՕkwTc`G_]%{^e!_| |,kOpyw<oJ;n,(T*{'4uig25WNqA&}H5 ׫3L9/PAp[˲pAѩUB Jn"=կRe)u9ES̢cT/jªnP,V!Vt.nq- u]+^5S:z/nL'zYbwJF3EC*yy!崰J^ceo+]{ْ3kx9%l!.i96"ðe1R\+%9XΥ,SfEM<!$g7X!k1oMƨƩܒ "S8<$ĺSP&%}l\ҧtqZ͚9` (U\rkTlVl9JQӔI|q,j͹fQI (ѭkDQD٥Cf cZɯp)m\xB?zZt[TJ&F?fiq*Aw,O Pc#ne!0<"SGAyaA2l0oW4]ܪٖE)oG1Ps;Lv>\r}*ۇz|Ӓ7fr4ϟ5>ƴgG?qod1-}]?ӈA?s>\a=Z mʜ5%+}}ncʚ2 ɗrh78m5ǩWH%LrA|9ȄVvSaƴpȆ6SH5{+lAEa& &T[.u뎍9OP k" !=}r EuK^&ůnd ;kЉ@⢓C`~ Nt& B$R{[DXFDVڳ`QQiz \գ쌶׏1J.NilCіw|!t\Хk]Pdex.m%P"Y_rE冉=`S:)\y?.s1Qpq;ь.Y%yY.SI/ҵ`IhSA w* $Z c+ Z`eͬ ;G_$Ù[)uXX) XL $਼Gœ pbK!)RS-*)m@3aK} DӒ#Q{>c[py&4k:/U9AQx)tdR-̇#9z~/gʚ6_aeG]ѱkp[V q&iL* D=/H|_UVVVH"ѕZ7߀{s[1Hܺ^` ܫp;NfN9ڙzNxm`G譆y0 c1ŮRʢ/y;>1f}Ȏp,|3P1 JbJBQJcmFy\!~DM:'Y~ut[gh X,gԑZ=M[[|Rǩäë^ɳjW{cuR=Č`~PӥD Dh0R+;䖾*H.hs'0?UYpJȆ IGť ~Gō1K :* neŒ:Kc/RMV5[D秨.mvA j`޶.@nQ \iurs b-])d1/HIN)Nq(MALXAL`!q$h!s05XA~fr#n6Z{4i `-/ ;P-2 oj )\G(odYUIKCEAhUrPq-Ml2K|^=PhF+aR !̹H4vZ nxͽLN Bn96ԅ ҁ/-VzN} k!5p.ziU5*1 莈ݪ&fIe97HCM)sRlջm3ZqBk^?."-(rt躈UvZݽ:9q㫡ފ]2p`@(pv?<lH Syd-7k\.~14?;Ahq8^wqZl٫n|^-lWLc!7lݫaބOW )ؐݙCv!t Ck*,xDmϑ~Ѧ6sPv_!eW{'0_stPiQ%sǎwryx"A6g)R{wsN$s/kmypLvU\v܂ kЪ)8>8n%.QCDF:eҜi/_Oe v[VEw5-o(7]8W:DBpb@%PBRTbB1D &g )Od8 #${zy㓖b<[иKMMI 娡rUez8L\<]f{Cv4?.‘oU Uf )&tSӾA_Wo>*=*gw!)cȴER6/\K5O @N6 2=ʋC]RY9iɜ~!U&/Y0%9 [A%jk5E樦u|8vޭe-ˌǔBoC:R/{oa>]pr7k?K>;BQud;2!)Dt V縆 +0֋K+TYj$nju{,^hqy@?<0 ~ 5U.$#dh&ι)QÚI?Yp&ũS?+X 
YbWN4,.>'ߡ3q-b֏v4mS4PVemm\ j$29i'}yK ?NGWvEUA@n<;6NEF@"HŔVF &J(NIBXΔHAB9_+(!qL)]K-3)Lkm&-&;a6Ӥff|כ1NOl=c&лѷ{Se7#]) N@4꽚Of0*G`=zJ}Fv̽j% sIU[;/TJ(6d=&,N^۸3CG)DdQl%KEb+Vm۱Uqެкjwް/ ІJx$S83_UZDB`:DcR HB7NWJdlW+e:'&& O3y'GMstF"5cԌ j(N0%ۛ^7P!f #srr#O+o&VEF3FnT xOxoR.qrMK3@ӉS7MG ,4[Hx0!`5꼄`Z{zJ2m|&RJj<ÙJ1ult:"[J)s+t^tzz^rNW8+v0 wBfwyj_χJ B<4/L!7<j8=&Ta(,dT2Vz DJN TqXQ $v[0qJĕyj6 Ǧ䏡 ,XYNPm@j!GA `[a{8Z+P(Hؓs k<|C] V_PH@KF!y'h۽z!/g5_yWW&i cP a!47Ѷ{fie"RA[nT!W{'2_6b497D뻴1ƓdB{e߄OWA0mvbT`.Um 0KŗZq[0ZZqܹ3!w{0Ws\n.iH(DdhJD:B$cη.գ@'*ƸDc+_ne}w=/EQB{ثIYOAgSPb)sPb-h+ Ƽav)6h]Ԃ2z\'m.GK,!~Sy=4 (GqKxg|:ՇcK(ĥ?$عH,XYdS֎ r]VfĢtl+qܤM\j9Ņ*'9vi;NQqkKp$֌hw D-لbϝBPK;JB%d(Hxwd]jcȎq+,{wβo3XJE9]pF,9#]1qB7`!,?^p}op]sx>^Nh8.5] z SӟƋ /> ޑ%w̕8pʚ1N϶fjpI 5s.frny Vjsm{>XҀ/G]I0w&#3q lA;@T Zu./k/lʐG9]MB^ mbs.]?8)"(58*֭r$GHo _ϓɴ7??⮧xm=b et߿KM}m7{F2P̵A"iHSp´AI!շ )}9mӸ:xa>XzUOg:ӎt4" R[4#R8H)1e KYR R-t(%B)d-yV{ᔑypZ-՗FJS"4I@JASAQ !sd8A4M%BHХB#@wqAێ~{,V/&"ވ7?.!(9f%/!0&tu0J;@ 07t h}8LcxÏkO.`<.‘o 5iPnsPni[ 4y?<}C{VD޼}TZ Ï'[{J2J%)!=z5J<)%Ov6 { k-U p:yfi'/Y0%9{enH~H 4eib7EJ -x F7ީ)ߩ=~߭zΤgz~)h|[;Y7NSƅtfGOW D8L FŬ )|-ػfvADYԜ~,{ɺ 0>􇭚P۪`RF]p>9f'^8ɠ<_(~י9o}b5ÓĒEOhBoa9Jut[l8 g3*!7]vNs> 1$ W4 ʘt/3&Nj.NwJ7rl )[nڬiͨqc{x2jpͳ;3~kڹ{ :r761|M)^oH߿Oڝ_|OYhc}GVwv:Bʎ\BgSTu'8׉cUGѼ#RnH~(Qeݍ)?o'>{DŽڝ?oqt/Bq'u],tb?;Ƴw(a@\K4{'\z 39SEsޚf1=oga#p 5Տ3 |'99m{ `QwVQq>ևS|)GvQ8<4;a@-S*4Y:7"S Yh whA8wGR2m7Klʿ=w24?:PT2>$�5?r)?l.Rp{ۥ.fR6I$KNGk7 t;G 7ausxnw.޼;ܙ3 ]s~"#q j6La,3 N[x#)a%э0n+_vo'9$J;m{͞~E'\2MJ1sPRv@X0/Ͱ64j{ϣ2RȗOQ`=<i_6 hP|>k||\yz:}:ʵ (Ժ .fڧxHˌ@}e,d*s5|yC&zaײ^+jO`|Q#6M(:Ppn%`=?M6eVmV+*[30MT:mkW@w=} h4Q_d(.]c\%'\j^M_IyEj(ٰ7m4Gm;]g)'gp q#oM>Ds-Rv|BK-n80s^M~j$zzt%&kww⿊|/ם4Wf30JIKkScB%+zRڐ^Ȑ.FWudN JSN5"`N{ЈF A˯Ĝftp'40,֞(`u8ȳQM&QZyLP?xBlܙwll~|o%qZN)PL9) LE(RQDg%J3{9$H=jBa5-"L״vpڪjɡmW#DPNs*B4'B RC W*Dgk#4;l9-'?(bho:# 4F?)LʥB_PIDD'\&hE`" "fQ(l"JT jHS-gFTy?ݾsaU$ZHB&! 
_lƮS `k{ P-7^s3QG@sieW;{SW;{^07̆o kGT-g P!FjN }i DMVԪ5X a ɵhҊ7묍;Ti.2_sfM2欝b)欝b嘳2wH$qd #DoA1p1V Ƙ T$r f)I/"Ӛ/U+U+\X" 6VA,Хoҗ47):(fRضAFx0`-s0&b!Hh䢣 O#'$B?5RD9>`Zr3 spՊ*qpDR{eY5oCz,`a$MLNį`I5T/n][EV4( :mw ЯA*P`hg`..5&aHKAЗE ')ј.UQ_ZdTS$U !_ p< WѻT%JI4*iUBIBA JX (q'Z `L_i=!+s_#zUUW%\.؄BIsNR LmG<2yKTورJ6 aTiςE%+e]75w=Z/&b 8C@+,;'Loe|Oξ<=cFɕTb SV93dSTJ++cwʘ9g.')M0#y1ed#|ٸ;/_|_e!n>cK*| ; pCT1ZjmPbjIbY!A rȥ D>6*WZ ľ z..:2\G#Qiz;UZDGI8:}>lť(rIy(@iXi,aH?NBH`"e;# 0~"csDyN1%`/!4 }L(n}oO/`q'1ֲ@p-(@c9@ k"3Kv,-(kB+㨖ۈ,(:bQ>F` KPaHc PwjPO|{ $(DzQ;C3$,E(\~F`;a"`Rol]f+.9tK`5 E"i8Apwy5E*Fc,RHdlA.ͩ.H:ox.}̶Vu}~rP5Y>TC5}{O)O&*7tQ y,h9ARPDvſ,p/"aKRkނ_XbvD"!-@x sSG4+{}%ILIZ JjT AXRpݤE)FTI@j1t4ð@3|UH:̙SFb!e߃'JJlC $zT6J=+=^u.9q޷<,IJ׷x1$ )" ( g4bDX%`*C0g\_eU;os>Mb`s<2&kl|9G%l,%f#{]:}=@1dp4ta1Z7B. '~?AY7ЖiA۵ӭcAES$HZ|…NPib~-x ˯)GC.綕Bmenm/ xqVO p܄^VZ iMrBQ)6Jnw'#7\02 +z?饅n- &lXxj"L7zum#wem$G boxPf%eЄ񼌃Q]Ho߬H6P>hD*˻.j‘eJ2C,B  1gΥ3aK퍴 fPb?6(|d WcDWir4=}f,,O»$fI,Z^O^Ԅ\2kd-DCxGAe=ҏ>0 ta8<[VobJes@ 5,h4XJI*ɯUPjʔGGeV7?CO6nLiuɑ+p 'fa?=wOagr<] 8Z_0E $*D-7BQ|}96GIᘴgOشagbb8F%) &~*;m&ŢvmV?yh֟oQYTO}_@Ov5s;f ; jY禵΁X6W0'8B #S3fwS]h9x HiGf;+raS}1؁8%Bt9i?}gaqXsN5\'qfRwyn}-8Do:#cI 컲ӣ{eB3[V~9$v3BE/_an },72_nq5=JXi?f3 s3Ǧ=46#7Gs4 w5 ?>P97_E̟^4Ȼhpn95hQFo[?gy1Ee@#c"ȥgćodcsmeC0|9'q?,]'֍OC$ x k |BJw9|j {6%:5 \2 ݴz_bh1!tNlŀ^};~3^__o57->X}y9}{ܾab"`•hM?W!S rZRݲ,v{j om#.tQF@;IBPN)fGh,FrN@E4BlךmfH1#}I8/I8I"ID1g{9x=L%!$jc;oIc(FbZ3,SE#iU8ڱXc)j!* Vjw:jdZ`JViv]k@3T@] ZRPZ˫mTy w;(1iB_aRPj7ǚAJڮOf ΁f*UoESܫj.Acʩ""H8](BC1>HpLcC.Ui !=wXA f{0<E тAZn>:5#(GVX+gԳKH/"BRj-luX Nw[}yIDBA7:xi\3^9[;V!*S%D+ڄYJV͡t#D-Н' ܓ! 4lFP#bQ>F?(3,HF FBFpOzŪ[h 2~abI*]s7IxY^LڡK`Sg7lަѕGT':UsYd NhApߖmj 5ݓ.[)CJs$r,zml#B1eGjLd+Av )]vϱņ9QslP֙4I塼4Qc 6t>t NjNx݂d߂?84/Mf={ ~NF'qi)ĖkqqD,XcAvT.6*$B_/@_'Xf.Y!&-:R|y*y^pJ 5ǚ:.piEt\XPO] !8pH< b S[KƤME2qCYOp!@,a`9`'̳ȭ+,ö`BK Oo 7js 6sC !H2 'gٯiY0Tɾr^_ȁ_cv M5Uˋ=SL#E=E%;8;;BSp3B4hã s<$'ghgQNpB( и7f1C!WVar>Ec*E7OYg0:;SY_짧Rg;֜ a9Ξ8:x3v~ɞ ifUkǖW_`'fWw1B<զy)WX_nۜ٪"m6!S34G"S?G~lMl7vnX@8? 
;yZ:$ &:z3AJ{n`?'Miw=^w4z]9|`R0>Wy;BJە#$W]У9`]S~ϸסwAj[~pF5WӏdN.Nzm8ጐJ5k@7.+4 ]s6Ԭ/-͋| G9ߪDywk@6[m &v 2A_zBZ>1_BnJb'е`pp31!>MMzV_L/daac\!s+yUj;/ٝ}O/ZEeVYvUg|j-a;I ܐ[7c $ n'ZK9Po 7eh;.^f1aI7rWG!x) C;tyvx7RCxIۮa&>Rq6 ƇZu¾̼ vVgo p>Hd+0Xv F(=Pwfdjr4J 򑊜s5G*>W_OUPk!ju2ZRX#(:p,J@1E53* zEv zH1z^.tltZ!'ʫȢc{F5&+.9gXG60Z WN(FO,&b۬j0ӑX)"-/~mY)ƴAcӖc#I(52CZInJUM~+O9z2i@kce(=^'HY*CDZ"y@Me?JBzV[U"$xwDBUr9_%R`A"zX.徃aUÅ\0d 3}hNM큃84=Xil/U^8@Y9rV  k}^ \ƋKp/(1͉G.:Dap x:f f8(Dx[l,Q&$@'*ևoxb ʃ?w GchKf8G1 6"fb`7qė2)٩ذ:*([{JA6ה fBO0Vsm@%m24VX/Q(cC1+cp@[UmܰBsvq{:7mTuSbuuk#nL!,=fV& `tp(d2N';`u1r5q+8Uz$):6#V%9`(`ʋf5%*e ^QƳD:CQ5KgJFt!ި甩9w2ph jnSf U 7jɹyp`S̀\(Xiuu Sy bV`m)zE6Q Un6TZ5, #K8/>TdOVHCox!PLҖiYV_`L+Ok#G6|}ٖZë^$D !U:?kJ)g7#]s6'~-䑃a^<:~F=b gutrfif_jo)5S?բW)U#3HqEizOL]G:mѾLaXͰ""lҿA6V97ܼ{ =/Lwu=$_X]4HM%})ʱ."l8bUry־fi Nm;ݯkFw;'TR+cлclwҡjrq"CzWy+\)%ber*5<̥g'·s\\|%eUد%V=L{a)X[;FdN?y7ZSa*oʳQݵa=(R Ef#)P<RKl5w x81:G5Ieu±DSoZ*bD}7"S`W>[rO<"bN(DLSy|I7ozySL( DK(qJy8sύf 8`֟g:NikI5=FPmHN<@SI3$2b|Hu<tzoe~>@d$ؤa<z}PqP4C[Rpf٩ӗ|LJ%bL/~-<ϊ?1a: ӂ"DVzkJ綹kx 5K|?mĐ.*0C#A͆6&44 ,4(vZ7bDp}3/ȇw!;67+ar%qt_ zD!Nı1P rPre.e(#0n]Bq @IG+c6N xs#Y^ H5ze|2E}Œ rԍŘܩa"F>Ǒu X@~m02/T!eU9J2aɑ?:*]&(5VKc_-!ذ?SDmkAc4!_"Ֆ8`b5l"rJxP97Am\"8&źKIx0F Q2I; 6"\qp)SI֑6IM#IBEqJVB٠pK83-3DA-{Cyb[ɥiMq"x¤IJSƍŞPC8 5(.ԭZ1^T,0LπXJ >$M>Cf E7q 7^tMate# &V B^ ]z|8]޼_l5 D9ya.M mu܁Q4GgxIG\M90^4rĆhFLW ROBX9dP0 3oէT<:鏖(kKL([MM 筡rIËsn\{F-YX&*6Pzy\fKxO(tji%y>Q/`{d=ʛ+s? 
8U:/iDs0;![N͠Pty6|10\?M,O3-mtqSqׇEM)`뷎Qk3p}mVmmF?`nxw^xƷGOp~nߦ2N^ygO^yϣ rt{GYo{?zlλu9L6v :p]\wZ=Wicbҏi7,z:0jllS\ZV+v^@TU{ ^wj7iE˰5\P^]I+ Mث1Er2 {6rގc3ᓃ[> Vu5`MԾ7/_v0C5\[`jï~r^>Fm?m1ȻV7@/)4y D`8h!ӊbډj4%տ=Ϯ]|z>g'翟4sNxMTY/*<'aKGg.7ͧ3Uwt~{w'{MdqO{Ʒ|0[/ .?/=wWB(ՠw񦗺{8lV&;0n\/#sofLzeșA2Gk5!7nΡ!|3 iV=׌&kF[ͱ(@t>=^\φ/4фl|?z7e]Woݸdo +>'>q'5ˍsl@Ow%‚WiƷfdn;Xx$lgοN^x=~%yO_tW_^iޫI<}9 /7n ~6tr t ?^uZ%WqfɮHt4)'ɸ?}?z3L: ۷N?)m^9=s >b{mہGe7sI_Y;mzS;›opPx0Wa 9+pzB=+f, 7}.#e0u\ʾ[2{n~ζnΆ˲u6 Ql0W\0A% ]e7cMv D?jh,ZfJCuQ8>Zrcbu3BM9 @~04 kW-UPJV&}c&}O.<7 U [\IFTB#{N- CFS5](ARr)Wp!!eoΌ cʚܸ_aˬ[zpFF5i} ɖW1}ţɪf`QnJ|';X#WoJU%Cg tͣ7+=tɝx.D OGXJW!qU;ynru @t*bT*uY.KܢҭF)gkP./ ʽGO0(dzPrX:Igf.ftq_b8+w7I%^ x$D֘r? 5{D#w:'X?H UfE1tv"n0j JU<6zQa` ^(+YTR2s|a/CS& o øGJ0- [%HPTRÃz6a1R{{%FkJI"-HNJ5F30c{<-Go}Ԛ-rܠOæDLCFbg8e&_1xjt*F7AB=@((qd oWLjI~uoS:5LA֙ʈ\#P>? P@5r@Hs`vM 0sSm; }o]v%<$YEhۻV+T>DaR+*0/60y#ֱrk^ՙβg-oDuY{M(+`1{u-+1ju״c!߉tKoo~@+L: q߱Q6>bUQ Y$B;wp7a7%>ML$IY>L0cm?j.ۖG$=ZIgaBtXaR"lNsa6z?NRWe/)u LJH1:V-:7N NB؀|>zzpe11Ey`q旴Uq  u.=zJ\R |G!wM[/-'0a.hcGqVhd? %Īwu^'XJ !F6" KgWe' Щ=%^yUļjgNlhq Yo =թrY[2`֩W(xP@P$drz 9^Q G`{ zrWpn*_oh(8>^e|"(ea~7>ܵ~{/߽M/ EXN'E-&Ͼ?NtDȃQFXvҭT]oQn YA +?&谚$ER4|HTHV!d i+uf11 㠵G{y#9,h"R92zE=rfF[&i pky)jD\ѡ= D4 lTpU&h&T`Rsp96^w*̵.j\:"S,b!"'L0` sb1Jb*|6>cZM*`5aĂ I)1Z(\T X"ºK_- ;Xu"l/8TwVjD0*a2#FPLT1K S:$9Ph ]5)T(豺-XIM7s\B1)"o 1&/"|,~UUO7q<)Fo L̺ 3fu$s2F G%y-{49Kw-kZ"Ըk.66s8zt6P˰5هs[.)P4\Wگؕ `*!ho"G;7ksfkW Rkjҙʍ6`&ƤlROe8k2j#ʺvxFokZ6R”yRBR L@aat$ƅG\ۨ@Jt 0 TFj9.b/Fg;NeBH+y&?Ы`owp=<^ߚ;*0#g%?Ϣ/MT Z'pud :HPbq]*)}KT!hhWrnd/Qb]*aBqM8N2)bH8Xj>8%]vR/EPJ򤔼cAu㤋^^[랜tҬ6V:y2VAHTѩ K2R{/i1W__Oap#h,0j0CȰQ,NEFM{=a)Zs f*ء ;%HS)s۱.*qF*Q8rTQhk) t5ŭZJxlW+sJ AȣzYl:`21ceg. 
PF9(B 8uJDAz; $nc^ɼy]'.׺7ϸF0/*aQ j:"JH⨥>"jZ8(8,xG͇п͝NEMń!65FOts60?-wN.o5-WO]˽^Md+R=d̂Iy|>?~p 6%20Z]: Mp :u$N[k%сdM֔ 9eQ#g4b"˷ aGo H;g8<1Q=pMTp I W53}2޷@f@ib YڦvA"5A"ShA-*:~؋Gc-8)0_.jm@fTTǿBYP w1eTg7wKݲnj4S7wSLY?V wrdo7D74̙Z> Bx_e5|̚ Õ5e/z3 5`%'J?=ҿF']7G3ھ:cvWȝ8x %>Kdx&!U TރoOGxT&+W ,ahr5 {t 򡵥h-v˽-40WLdE oj_A0 "#\CV9aK0xZƠ1b@7$r: < 0wNYBQśף0ud)+{7Y*44xJTnls1hA: M=>ABBJRLd/QBa}R%lڷ+CkHLr)JQtO /m*YIN؍.Ɗ. s].RQLpDUiM/39Cަ`$իwV}ȗ Y.6ưZI*% bH~\ǰZbI}ȹ6,_2_83Y#0wٲwwSqY (кRkʹ:];'(1Iz3fΌ6D) -x JţZ5-zIZ">$I:txn\S>J*f`zZT AxUlG&\}d\x>>WnGs-XE#wEmPD*G"XF=vhGXXά`ˤ8 nc{$ZSj:E޸ ~n)(W07R#W 7׻6X|HcMiܐ )&2t ,?*cP {;&xlN ds :"ݐIB $gi=\ E(2,!Of\sua' k\pjiM{88)ʣN)ɽwHBC4e>_-T9/[Ȃ!f_ұIs}.Z"02Y+*3XG|+``3ipDB8Aep),0QJ-jaR#鹌d 6Hd&0HˈnPw(M:^e,Wȶ)+IAs: ;z97q:QT+n?a_)L`j[WQx3g?߀4""C$ۢ.ݷL> bOt;f\R~{r[,$b9?b4oqM)ǔ" c f{RSiU"a ;[k7Q<9;dZ [u&֓H V[:HA8eJKk!*bH`NRF=3)qodj(ln@$ Z8,¥2hAAA?{Wȍ C/;p][;>l*P8Z\% I'Qb,nuKETK H 9VTj#QH@[V '0EmZWv+&glq!'l$X2,fntARlc,5a[1`45 4%@ў0B%RojTR5#;ukܸ.xH0ΆX 0 7fu9`O4+3KFm$Yߔ:)Kj).gx9"|CQL.]lQg`Ehww7WampUUO?\v'pol@~Zl9}4Q}1GW)f/ | MssO26|~WLIgO-O?qgflYϬ R^P{)ſ^o8JeDJ~Sn0W %l_T:R=`1$OМhk3v'ȋV m?fٯ]#IcgjQG 4IKWuw#W9My;z DZΨk|?J֗䗽<8V2WSӹj |JQ'үOtWw'f> z/\*UQ*NٶS&TK t龶 SG!5|y9 irL\:dR,d.Vx'(߬Ž1UTEx_xWs}|/sIL[0wحݻz#Fih4E7D9̓vS oxvKr, ޽j *jʪ2܉|!(I~w62(|l<; 6v ""lc=|ĎIT$Ŏy: ;석iUyZ汎"fg*Ƅ5!:u&T{,a ymV2ӌ V3, ,혫*̥s!v9ugoAUn9I{GQ(fuIT-_(fX Q˶XzC5*ʰ5=)ja|]pgT8.е1ÆbL]n׻a'[xwq!p]y"-q`r3eA pϽ˙A!-XUvՌ;"@\0@qPQFLYpTxW>-.Hv.B`ꤦƳb:ҐFvofjid0$2-`'1uƀ RAjɶcEøRB>jDtwCKu0ZϙNiRR)!i1O|ieJ&-e{iAx^˯HSfۈV8cZzPޤq.#8W)~xzM,NM[P_hGI`LOzXwjk6 4@}SKT*JiRpêe&UDItWaj)OԵN"cU7~Q N[MZݻ:c\&?SSWrZ4}i{u(öQtƤn2gb45)UWuOWsdD\wiT9e葤)竒e~3PU$==Ů g)Dtֻ9.(%!/O 2 `%QEI0` ͑p}nuEꇫwo@G^@q1/(!{1OכYHBX EI|-e,–a۸GXi!PM!`U SlY u5*qAof O`Mlb[ۻ!* Y0l}/?^w=B]NϑϤ96u1CrY|J  8oO>ftL8$Bo&hb lVX#Ր . 
**{Gp emS94 x܃HjN}sm BAJȂXpzRq[6" *[sR,f?LrƘw\H\lwe42ӆ%F%3u?A {[?@!+sn!xyNCz^Kc{.Lf t}ap7weXQ&@?ڂhs[xm1Zfd'^"LByw=F[`p&xrԻIqy2ÿO ϓO͸Gq-3dɘa"\ ŌDZ񣥬ꐔ.|V /Ţ3Lذ^CNY۬}5|L}k-ҢDKjQߪ'Mӱ`f.DծZ`rs_5c%mHcxh, -3.'qu>x>EJн %BZg0t՘;+AArmtl 9ox_ޗ~}!SWq-eG'mÝpMPWQ$]cl;n1*PIJi?GƇCƷAmHҳRut˻(IPsTx֬GWe4ޚt3m&jEV F?ڿ *׆\mrQ B=/oZA̐۱c|}3w"*!V}ݍwώoGx{2Df;z^򁡿ǻuOۻkȚgO&ߋ3?~\&>?tDP SsUlMm7C:xI唚}]Jjm@(ќK%6X/rRܔqGc%)峝3\0.TɫUd0{ѵp?~NeH|%`RJK;=a5 u%T١:zүх.t@hfÌ"6ڨ d٪S|e #K+pd !cHmsEFUfJ;) +!_9KW!T P*ഫtI ؝Q{ `axճ\ؓJQê'ѦM ^+' 'aBStw8MR NN/F]\3,# P"|eg8yúKQd\ujIf֍nv`v#`v/'W"XԶuªۨs6C W!^wP_!mu[6}MP[šeć adރJ62n1)mf3Pu$Z d }4ݱ?D-(2K0wGpX#޿Oo]jN$c1CIttX9CRnJ vtJ,FÚX0K '0G'*:bᄅԲ%)-p1F2-cI`\ٸlų]?&dB5/ Ax`*˳_~?~ĚU䮍C > q+Cr04|~L2͐r63^q9`ebMt (˔!!c!Ӕw.Sc}.Y|xlh?ٻ޸n$W,ff*;1Ii%-Ɏ-K햺9yF H-Wօ,V5j:ǫ {p ~tR?h䞯T'?4{xhׯhӄ Z2=~QQ`U^@_w%B!whmsw6WKaΥl)ڇxcPutIVB(.WL y;}VxSZUE $/A#`#3=sI| SH"!;d5~;}VxeBsY2 1Wq:: :-n2vC_|N<~Y7&7Qrp*t$ Fn}VxlkKA5qN $]?g찋ّyK;y6{ߌr+)R^qhg#~q)Jevgg h/I+E9;+qpG|2]$SBOf;}VD<$ ,5ZII 8fJG :gk*Z'"NH~#P#i?j닩R-q9qP@W|Vl->Gn4\)^P V%+Ķ=$rPJRD6\|CnUS 45Cym<[kP\܊hYI];]0$RpԈ}~W,M Q|*i ;0:'2e2T KQ Y;IMl:g*?)zDP&TTdOfS0uȰr^WU 6!َ(IFlG"r5darQI \ubK8Q(*[=E֊D]u*b3HT\ $;aj6 'CQǜP>+U.tL@&K;Tf8jp!ʆAI6cYi_8`jK"ZK >F6bL:zJvشBAE5#Nb⌄hjhkғ8e Ԍ+o53o1U\;vSF/.7^Yv=mo'Tw$PQeÚ;'r't$e*D{!AGoRcVtbOZ #;Qk/x=O+IPk>^+*@X"j}0Ld$ tJw,o#: V}Θ~ṉ2ګ hD.󮆌O>ub6!>p+w{7ۍG0J!MXT઻5jױnI\"sڮ>}Vؠ>ʓl\` jZp*U[5Tؽ`J*K&+xɩƨmX(f[B `蔌l)Q'$erҭVZ [Ff`?*QBC yCD'D J:YFhV;$fh g2Ila@!4!Z`찦,ZU( 4{^w -!b ,gw%i[] ʆQB)gJ2<#6Z\ZZ@P璤PaOIqr,Fن1]Ĉv8j-{Q 1PkJJSe_,ވzSQeb&h3$"8LZւ:CJkd!F'ʫnɘ$C/bS%%?Dy {Q%X$Vraɱuz7lRaB"؞0fCG]Xr؊ȑ(MkBbuDk*!P)6O#X1R*i% rPN&,Ԕ\DQkUEQR`e'AT6x{W.IXMl:i%QNTSHEF#PkC=zkYgF"R(t/GL);#۴B"e8$63(j\}IS5VN>= Z6l%sR{WjNR5)ca{ m(q~8>Nwcz3%~S潅~}3{1~w?>?)\̗eIK =]n93 [߇t|?868>z=y{z~Gߏi{'HTfrP.99N9|0~ӻoP|,a.q5v_(["8u!EZm6wjewgorgG ty/grG:?½z0{$uc$h@-|Q˗ʽJ؊^:;Z/ۆ:~4+u:Y8o^EjwM?]YǵyI+NPweh{lЭY9[mwk/D~xOEyHZ;m!MD#% V^|6)&ec2xK YZLBG+J^޴ːO /Ύ.b {J^Nj,.rP8+r~~fFbۤd v|P-*xDu܏6H;շQ!X\5\AJfvݕ#%wo F=ٙ;ye~Z?m~1?zs¾?}bs)mʎQІ uCFhU!݆|A0+}`,. 
x+à xD"*2RK$V%QIukg{xi:e$gS{ 1j>?NZ.]Q_.{*Npbq>ԃCiD|e)zeot${chS_?g,?qV\,7n?GSil`rO|L'`/oYl>OjT _:S<떸hR$ jd7j l\mF,r.To X4:b^ cY3Q%bm&UB Y]_\a/Lv|>IEmO囥ϛײЂU?Wn2Z,w^~ \rZ1ϧgw/D.SЭ:֓헟xqTOW׷2z}3YyoYކ>:/mXVIKwMO4v!^lwݺjZ0=}>f1YO,0_"wmqF4X$G&bSvv6@, בfdKi)vx.j->U$뫀R =06"nR# 򗿝MэpzM%Pp B߳9]ᬶvv:VDSHڧhφGGxt>wʹ#H)e!E & FȐL_Xʓ4$ U8q>Kl^TRGQWW&˳3@F2#87&ulKqܚPd΃*bA|C(E,"K_8U. =nZW\Qc1໘/.d猢-(3]tN9; _M))$Sz%h?R*efE 4['#t$: s9 xcg[A"u׃ *TmFy!؁=\r5<5/rq"m#vO";_(g)DZ6'r['0fD ~B5 =NNAuK{/L `AlHG3/2BhLÈF^QGz䶌m¼0w Kt^;)p~ɤGVؖ`ސy%RHJtT.T&'x|`ͼ%l$뱙Yt)VD77FxyՈZcȜ;A%M5$1%Hp\Nf2f?l) ?0jP h䮌B4`/ٰ; 99Xʀ jGI7jdce&rH,dҘEw3dt>e*{ r11PZNVHE4ޫlH(qhвb.|ACҙP8`!ʒ-ӜW8Y)9k RK)8 #e6~m0!n"q(ρPπar w\Q?eL*t8CuZ D@u >03Ë"z ڬpA!=JwB.NG#:_LEz!0$0 횶AT'qKC9AC6iԷw;b/} I[,,O G =F #wrI==tZA{Ja(L[Gc@ۓ.zayJe$糳ΙENO;X+ps)\=mH#K4})}9Б_ʛ`!{gZRP*(4 NQ!jxuF$o,(Y&tIE Q[)29RAZzv9Iar©llc>ɹ抹[A9FA97i֑.@% #mk}HiA*$FmZd),YA@9?F Ai~ajJa @b+mo92 #U^.1%+;xjucϜ;vˉO-sr}c~6ij=. ?Ͽ%oٲ!mBm$A.b)8Wg|K ĭfNЪMZ +NtS?}bn1Z5Vv>7_.:eEܦ (}'#@$t!c52󙕿Lg]XWRzsg rN/.5&bfj4.@'jsoP':c*lf9TNjX_#u-#ls_i`;#te&9ӏ00ƒF1 4=Lc|7o|2F]T׬ Nͪҹ뿧.yx蔍-{V :"Nl:*e* -#lC7hз?)@N ZUĪ (y ԐSȝLp emfקa3 sJ&YHNGS|Ija x0 ;{LEV~;CaB&yg! ]}xhsȷfo|srYxr~o(hKJY˛w_No}9-Tx^ fcFOgqy]\6LNR)[6dmD4'bŋlmDعUbɽMa7;C):{tM%n듼/~Z/]k?'ZƱa5ouaI鵵B_Wfқ>[~0X L `|8{y:5t^U`|9S'N@6w9e붼-b]mkym{1/崑SW䬽MEq_Zc[+qS]Vjf|5PK u#YgBdݹ$GX'$jGB*#USDVu"@dJH.%}Q1dtNS^ܵ>G=J:c;X7Cyusru .:ʾMۓ寮鄀"q y_% >~zQhaZ轛cKɒ )ƤP<ŐVP[#9y9&)[J\Q ~]dǃkhk-:J^`t !r>'c`PNbh0n mweὭY YbT,!|r[H FhKHMTs-X b-5HkFyߌG^窅ğHLK*m Ñ! e @fOR 4^6O6+i/o4=AhC,icfo1?cfL\?$=tNO5ə#\"f^ G; w&B5"*iK1юXME_|,]O 4Q5 8I'XxZY]ڐ;.}9}hZ23Yջ ҁҞ-L`nzf6(p팩 8W0ȥ*>t>aћP1FL-?Z7`j.PKZV?[XKOgUc&myVVh @{2 ]+(Z8QJ]Js0 FZBJwOr\ax&amdVHt7cAA_p}sif VgʦPw_z^[TN/ΙOvfɂyiZmVoz۟Ve'AF\'%@r6\!U+IK,bv"~͟-zry/Hw§bGPiGF' 1%'p/@Sd2Sn^FKCcJE 6UWwV+:5 ./U*'M1NhkcBE,dHj2JX+SNUЖ&VSxJ44aF3hflL:$<ܼe1#R`͙"7m@Xr5XF=I"`:3.0"$R<ƽ .&*ߥ u9ңXÑq 1yؔ>]yp6^ܜhxFbպ7Nw;n}"taaՂkqQ3s_ѽ%Vymx↰nQ^/߹f|&ZdS-^,&{U,{n2H1wx uidޭEPwa!߹ rwDV*)}Gw.cB&nŌz`˞Ah*@@Q?n? 
'WGvH8w9a%fOϏ<53XW vd1k$Q 782!XK@̅ρploɍFlyR=KҕH~Z 682T`Q}HUb>l@2!J(K3*ٖBz3]0+7SU\ˣ3)tb`^Zysvf\<^^7R}0yC; np䚐^U08K4Jah]E%6^R"&Tp6U8WY(EOTa2Q YovxE1TE{ n>۳&0Cc8p%esMl0^7u&,`EJl@aʒF\1m4kE>"^=d[DrJ/Iw<Mbir'K\sDbDL/#evT`_::"\zB#a^N݀bQ_ŀ-=Lq~8 Ӹ6Pp[Oƒ:I#[#IT &b2Af$Qd c `$Ic5UopSk3-ьpyk )p`Z*I.L\Jyxݶh`L,z cE[Xr1Фh%?)/9l>B9DX!F+npui.pu+4'1uk}'ߩէ&W{gLaE*$C~.\ɨQ)W:Z⤫8`pQhSӣ ByLIDIfa'ǟI`*V>ߗS/a~,2_;")殄filٓz`p~9EqR*/3 I:"ZL$VNfy q,VrdWӋVV Hʜum+YLZiˍV 9xPSQQE|Mu\1 z ;N3t6$`|mN;jv/e9_|Y"WD^$asb;zHmm!L\%Op͍ҖY57KJ%3:2 ;< _{OOnɨuܛQAU8F(",CGi?FCcLD N{E%hMl|*6+gI{LX xD-g[8%H 1FF0PAl=Z%ֺLaz/ZR8Г 6^$sU l-AcID][[Cܽ&; Y( :.KiP UQųDl 68$12\QDH (V RrJ"#ޅݠZb|ե2H.}GnS9]ϑ'|&ZbS/{-,{pޭT)SﶿP&nŌzhM1>nGV*)}Gw.$eWVnŌzMD`m-5L;k~(0e8RFأQs|h}N+TDmcBDəآL؀GeZQP+Sһ6\y8O'}̔,lQIvLHOUp=0;.[̋~q৒B5 1 D`a#ׯHOR\63f"Q=LXkv6v{i)ي6ma 5 4IA[򞟴RXCT3*Dd26A1Meu I3/KIAr_$$yoU em`/kBF(!*҆$ {ސ'by`6T)~Hk^/ QTTb R@["f``RI -v?d H$=_,ߋDS,l^VkAٶʣPNPF*+=g1 \{!DX֞"&&0Zye^ "e4<<5Zdi!Bx$aGHaaa_;RQp^c,qH 1,xoq]\}KNMo|˓E[Ô$8d+㵑l>o&aM~5,{ 7dK0k p*^܂Jxq9MԤ'T5 Ξ`[+=u1k3H9a]yhwRXJJ{NV&z:Ή^A3KBb11p+%oK-:ުh8ifY"ͬ94cu 9HRD>s6=JTZ\jBW pZ#\0-kPq{KPdHX=!T.Gɾ]1Yv }bP#T-Or;هy \Ҡ{duLI͍vpRQ<-FpD%*_>Qp&OչLӍR8\nP Alߋ 眓zf{cxUTHi 2,/,s͸X⶜,`Uiit ` yw^)zHРy1VH$0(JLQQ {XH K[\/R׌S}%j![[0ګHgDGEZيeTrKHg>%f32(m&A %,7ZVk6Cmls|*j s[aVk7NK1z֥iPm# i2'j>t.syY@GS,Vvqd}rK9}QbS\ <~ň|+ɳ3LW͘5L(-dv kQ<$c+ꠓ2W 􂂇CQP!/#)1;aG2i 1pNz*6}$okt 2tWE& <2eӎ9J)yEȦOPc|RPmVH)}[\PE{"7+w5S&P2i@kCYZbʽ̉TEYXS͟%'ĨC`#(E9()g,(9F^jư -E!hF,꒥SLM'in&NC[̭f̍f-T~*¹\M>O@m4Bn</%&N'ʇq<نjy<UR%(:WVR+N豝g4l!2:,־nySؕ_rsSuηkʊJZ?wy;=?-s?o/pUISTh ce ]e 55 | M,jzr佰a#s4GX=11焱Z'6䞠QW8p!0)w]e斬iGL@G2EO1F(Ԉ>niƨ{ 7W,S9P4Wb0geYA}vb$ ޅ[Ili4T::"eDtwiwqb#p:ՒI>6)r98v:wvNJtJbj YyT[mBaQ-,|/Xt*k-'6o#x!AJliij6Du(A&l1#Qo<<&Jv ĉ݈݀m%>և`5˔0ۏ-xNr,iq$e@+cJPIs5] tY;ptbǿ pϼ@ TNrm=qR H_f{(h)D3$b|l H}vaM&&ZvL*NS\k 9 ?05s^JnA] ϶Z6a;dO]ɶȗ~uipwgGb@1ܩLlgv !H%D &\ sYwCx儢*άĜ)QgA©KKkRh^V/&Lkipn7Fj SC*O(}+NPM|s]Bs(7ppB񄨭!B%[ FBKPBOiz[xZ*7-0BUhOϸK}! 
(Bn{rPVѧwt*8 ?$hglg!Q*@;q1iTs XCd1yjB˨|vB?PQ9ԚS@dgGCjz1>*±Ǧ0 C0Kd>E=ا=ى%(#Lʣw>,`Kɧp%]DbЕ)M߫>.1)N_QkәНݒi8u_FEaZϮ8e/D\23v zz%?/Y- _$voO4 XJ4|0kdsUZum#=EX.9`7MF|Doۭ_fg Owʜp/s;1$}mZHĉ%/RFžV}جn3_s?f#" Ȏ2.ehJ1II&k+VPSP(V2qU93"(2ReDx4: d~a_{XPGڍ^SQ<+ܩciS[>(ʚ lzgQV-ݱ8:/c"_oyzH40nf>u-H>\oEw|PxŰ܉koK)58\a!8ﱭn?A+:3ZW0tcۻs_lm7]_}rAŽ;duϽ#;I6}~{ [;,ram宛:}ERvUsJH{[+AI1), 06 b=Z)l,kʡȆU:uVaF!-˞#S.XXJ٣5%~ 7'}=ey {~JyGF^#i2c>Er<\S;  Vtgjt'Df`ջ +RQNŴp@8Jwz4c 3'M>E*\BO_#[N}K9n%,֝B(Rg'* Bb~1nKG݁#|V(z|!p0G8ktB\Ol!-(E1T2eMQ2QW$*Y2%;tE)Vʒ3JJ+^YE/')eir0 %T*Eźk%|볭]x7<5Y=ǮUobSΈx۲zV>>#Tj~qkJ8C|1߭GX|E_30#/ g.fb1a\8{ {8<ܿas߆r.f|b}x0'\(-@^noP~^sO uoO{jPk| 9H.4uz$~'ƁEv- {pQO ;ûYTKn~.swjln~m%,#pD9pg !ҋ5aDX͸o'ݺÀr3+KIt^(sY!t9kVhaڝG<U]yñǒrȚw®S"a-0љw.3HEGUk]&,{bfN*Fi.d';IE>2}#_vj\o_}nuSmd`2*H)2^)`6rvsZ|u\W_ewasnl./)9q蕒Z۳_Wnߺ/nOk[^|lu %zR/%~>p}^ r4Yax+g*Ȍ*@qb|C[닓#lK>iTLzt%.b(MՊ0bbX 0@fKm\Bf+Mr RYDTQ #*vr@5\'5n 5ǫ~oJg>,դ?AxjUa*m +K-mN]PBCYNMLs,6ݕ4V&j&W2jeuѺwP\VZѢD%ӥVA[gzǑ_1rvΎ/a;X<hB%NGv$DN,:-+VWx"4&IjdjdSI(]`x66xltR͈[O9L^lQN3qH8KHF*Ő-!'Da=w(O)HbB|Nac1GA!BQ[@(!'RsR _>NNu:PbG a(a:f:TBbqڭz("JN:-[h:.ڢmM|zq>0bu^[Y< /Q¯g̳wł{cl)[17e^ PBkJBM{#`&[h:~M -`H:4ww+Cl-#fAYKIܷztPQ#iS1 ,65j4r_]-fOgg#s1Џ8Zr^l7Ql7㦣H2#j+O?ݞYݖwC} TVYڞNiCTsL7-ϿUdZVjQӽJ xgZЫ݇yw򫧣7W5#8*Y{t?-yg4k(zsyϵE_^=ttr9l&ȷɶjͷӝrzV͚S% Z,w?9tDBn ӟp10UIG3_-s2Iu^2LZJm.}9Aq:f@?sS*1%mo$_,(l:oX=,Lq{):塳yTXOaJ&R!ZKMM1|{$ »5Ձ4}w/@;n ,[MMa|»qR*Qѻ5Ձ4}wsIFoּһa!oD)L8*`tn-SӢJ(I qL48 D#Be/v-! -_̵0w1 5se$/PRp(]tÂcxL gw ^z.z5߮#п4<+8POڠo,MahIF3(S&H$HGc)ç&:H"0뱟?gj72nЏ:ܘS- Z&1 } %*V]2m,0r*mE]\|pdx,q^4MM/u]vYwhym<ߢ,3gوނ/hڐEAcp9v𶼯T'S8KW}]] hq'Gu.nCNrqYY %V)Ʃbz +B ") "PőJfqm00ʶbWj i;lMGTWa[oyIHW`isAkr)껴V#NcӑJ^OXM!ڸi}Vo[7 QV /] Z:˟9A!`! 
:vAȘ7gwf3_|o3ؔ^1 BzX-wmQWûƂ u?,wv{EA_vP֐a!PtC U.||U߫lM?5!ދl=>OK``{KȚ/u_$" (_D|ûkRc5)ՇHijMSh01WsJtq9NVqTvxFAlBZ !+MדJa`rlX|sTaʚ-znb+ɅW}?]%X/vK7µڃE'kJ6Q/^/A( 3[lYDym< Bl~|s=]g|smu8F|cKGj`_mRVWν-FTDžw jQ)>mC\N^ÂA@P<{J8jMbw:G~0];J!)#Vp8)0"QKzZiBXa{n7&2a"_UTޢ[ᠱr(%<ʝN9WM,S:b?Qʼn/ՄW=z*m=BxM k( ~¨W!܋"9` *S<9oFRH_%0#B-wJ  Lbt@[~~O\z?w[6bWht#trP8'lhK?im[i0"v԰Ctsd511[^v`prJ#[u辒}IiZ.gWwyQ-M1R8YbnGb@cODD[ ?_|/)@|>ko4{ |aPx$wnc!mg7zVߘY>.t7?黜7͚o"OܶwmX7CtcQP y*/Q>2»V&Mn@owQ6[,Wk`>>cܐ==+BN)xt-W_<K@B^<MH*x4_ׇ;,3@WMx9A]-meˎ)X,Yjdjg~-Oڗ,,qrf:0Nr|%O46[]-wC$63ke#z `+ʿLR՗|^ϲdSEB8_EA&ĉl"P& ϲXb$Xp9.;};#sKwfVdL\MyG4O'i<11yJ / VmnqzˇΒW)+ܗbPf"UHG: 1Y^Y1Ub`"Ly;$Q$cD! y?gf_}f{fw̆Sb)KӹbWjK48sn'@m|9 J4JRK=+=i+V pKŜ4ST,"41̈d:0"e(UP"!zNw\HbH-^5YMJXe{Ւ`r7\{ՍlE7}\G 1{EV1yt5\m[vEsS%mlOBnKbwLViu# LK jJ8`=PG)` Sa@&Χl6#F QWgQ^KW%AP: u^|]HzcfB6 {捯v yPt5NN"ń,QNLLmxJG6bU^%R @`xfޓ3bLx"`rPmtCZDh CA#!Q |QSs;AA$+yR3HEې6,,C

/^H͡xOgЎ[}ZkZjI|V*-Ye1 [~R ӞdSR(o"`gU @VzV"6K2Qfau㶝7^~0 zph KŊ)wt>׮+Ohn} 6HȀIt۫QX#0-DH/bJXD~lHzPvap*xW5p5f͆ia[BVxȎkr|smjk:P?];dv NÃV: =/h,29kRpucXX:CnAQ5i-[Yc(dZdyefRi\8_@$I$\<@[`cZ$#SEif5IU}#~,,2OMt-!/RZI3^v9@2F>Nzii)F6f:l-垾2[-@>NNnO>β娥G_sZ ~ZZJYemKRe;JNTii)ZZzZOKQ۹S(= [ԟҵyj)s~L BKRKGnhkOK4>;QOR+)Ĩ^57vvdʿO-\S6jEk)zR,SKӖ"ZJAQK/ZKRU %aKRj9-u="Vҝv7/[Ki)'X ܧRjM_zZMXss'|ZJR+F-l-e1Q CKin4H zŒO.JgY*KB!&GD$x 9`b:59ހ݌2JJ8Bx@8JK }Z_SqeFwz$"WQ 4m ψ#jQfrmX [eC\TUXmF乗"HExyxʮ+ո4.J-jWokWn[GmԮ cT ivT-C dG5DS4 {3&b5;htt%wVgC)#mC vkۉ۬]E39J9`ٹxX  p븥]Vb%BF6 mbs3ݸ=NI{-*(u$)\.iqE: 瘳024M N*6d(1esLMmh=|yM[CM{t>5E;k!`sS(8̤ueژ?NU|(y Sv] oDJ..0^U%ـ$px$J]38>G!JJ?\ º<..;'Q%yJ{ OUhw\6˜b7;aMW(J`]zz ѝq*ۧp\׵JP*n hOKE˃ُ&;yr;<XJ1"&O6O}C-Z!Aᣆi v}V"1;|o?WYO9(RC rd+~}u24N&" ;3M/iJАTfQOkAXq}޳,P/;h)xt;.6|G ELE|3-Sy{[KtG-09Į1ˁ>ߣ;L wv/q=em}sD8nGĉ\Wj h.= g DߋtyVФZcDŽWIfKf3)DVP1) Pg&y3 .J B[ڟl8v 4FM05$ }N9ᩔ9\fIDy*UO\qBYj )uqQE }(<a1d@-2MyD+*sL"3 002##3lw֨AP-IvFu5O a&$1[t,g@+?7E ITcB*fX"H&FI;: 1%Q@<~LK ־ǔxLESzR=xS" [wLV1$''yRr Y=Jx=jT=y.*[TZ[ĜWخ_&o\%;֖vWq}{j"LxUV~7f^D' iu+}mFUJ~f*Tj-/:Z2O?%bڐ`J`nIVܸ7_'䋝V}Q#-wʿכPcmjQ4ܼe?ou=C U;_{SEL룧*o/|˯ '~5-A¬xg0G< OhZ\j yv~y[{|w [LPڸnftm\{utU>.[jJj20#7=r.:7K(zfgʂC4'/ R*;k* _;8ri C;C@ +;kc[. 
A>0{&lK*%n|FxX_6j +3pʍKΗ3kKU_>O;ʚ V~/$@#8ʛLnG~;4)YVWq?ݗ|Sr{֒|pmSgCV Ѻ5Gu#ĺ}9N+7nC<[UN{MJ[S |T;B[)VumݚnmhW:=cq9hI+>oץN~XӞaZ6^w '$04jl^]KV׋yz{[{m_xu޵Q(_6Ɩ.LsW#w=k~WJv}Yb5VJt{ ՝W-n&]6r)ZЍU>yx,7WfGoW_75P9#_QRmxl,&nQ*}n6?̭x\;_ Se5[&П=M7^=pd (doQSQMi΄ZSbhAF])Q%Lvw;a׏YfbP L{HM<Ք۲~ڗz||'_c׿ޗcMnW-Gl6QRY%;y?w/Ca;eg~Vyz<_ PSRn1J(T#R*9"ҤGhQ0Zb-R=OJ]eXV߫ gGXy!`Ln1s+6gwT Ƚ&(Sf$TƧ:cTI2&'y}4up0_M ABAg^HE"/j7c"z_e\h/c~{ R?6;" pH= u- 8ԏ~.ӚWHjmԏ :, 4eR??sy~ܶh%c !I0o9i=cT3sѬԏ"" /; `\O 䜽[/B`邢e.iK͕,I5<?_jJ Bn(v .WLMnuBk.?>I-x=fjOZԟMϖ]gjxfjCC>6)U%2y00KaYKi"f}lm|pmS auT "cS &Ɔ[EH<^Nc|}4䃫Nyƛ#NeYv󄚬3qQѮH3+&Sf )ʙcy0A30ZgˉEfII*DZP)ܽ;Rq5o@ %$o4ϖw,_.T\珷̗t{xj1{s01;~%L,h~X}?u\΍}4dZySX*aE~ŠpR=LX>oy&m"k"i鰱H{`^n/fzۚmd 7NFpͿl9]NϿw`ߏcШw8>Apj7_3dr26&54k_.ljijW)+9>|ؓ*DT:P!ލMm],.ݫ-(be"yG0Z>:uEvE,N1|6{m!sO|ƣ].ot;1BQz\#Sz:0؅3؅! =+Q76j};}K B.1vqHc+{d6Ǿ$M$e YzƐCv $tȒ jSRK贽:2OH49Yw]n<MQR"@ =SKbFuҩsF(Ō%J}Dt &hq7#')FL(S9Ee%Z}Et_1C:C) ӍJٻ6r$؇dE],n&L-v7F4I~nj~ٖg$rY_XECj$4L"M\ #kYd.SY(L- 20|n(Hq̔) C7f}OMr')xv!f/!F"'[x݉R<+uu}(neS%BVE<|o60zʖ2 4}U}FX2.TFTع4Voz0乷O-r9g526 rZ' g^AzU毆US8⽁dS9BdT>,vRv[I"Y'+)07_[T6d~Åiv ,Wf+ps;ToS_ }ܳ ,rp3L.f &; YѾz6 4oxmR0Y KY4؎'3u Ӭs_dZE퐰m\QݨPㄥ[YR$x> * 1~0WߜOM+_I'zH"Iṗ))l1*kP)ʰQX ϏSbN7 Bt)^y_zQ+:v5"\/OrPDl$t,~71^Rl핼.!3:g^` VZl2*IX) zauz/k2x]pCp^\z:{[eD]}NR5(ċ=ŎFY;$.&n8/E1^l9\@4GH PQگ0S0a2L,7LcctȖ5~j03ߍp|q}A2zo\wȇV`S\Ӯ@O߭V7ט%fs V|D֢o _`sK׫/SY:fɷ&T艶ao'!cNdnPS,;&}kM,0IDW cD8K85?ddڀ&# --MI\MXK4ŷZ -dTޝL7|iM,UvC a-NH"9T&̄ % #1Q$b,"Qj'rs)$VS-NN*<3bDngfp]:a\V&!#l7XHDU QHWג\s_ ᮮY1Xޖms4PGptZ5)p4֩P P٘; Q J0)SL6gؐmX>%$Jo'UT ؚHKa b1JMXFTHBzI)5UaeMo]o9?EYl3{ yȏdqe7Xל#*w\)(ava u?^#" g? bZՇlk;1C Rm=+RdDD\cOO({#Ƥe@8دk0»\RALO`X9j~xZ\}#)!#XI##T2bZ+-~yt ܙՓfb$3iTwok\$$bUvbbwe= ˈGƨx(,CfdJa7-ODGx<> Ku,e@=>9 {= \rb&JA5ò A*REk:}˿h2H?E͗?uqݡE/%醃M.'HQ8~Znm@GZr*d- >Ҵs * #bDa1'G3XǑLB2E Xp,^g^;w1P. T v0qi6Jmlii0T*E&vy1Bb [I EFQJeD)e`8E 14M$E@*!CG hW,xGuY9A.]'3Q/A(&r z tpǜ 2Rԛa0IL`52ƮzFQxB9ꑖQ z߄KPBN;r}\%~9Wc{#LgP̅#gr(\f0QnC݆R%P^.~e'0\t3E[Z6&j^ Z] aqyn`iLpuut+-_SeH({ 5X4{u5fR<N1ָ/֫S- c*L鑶t++?kj&AcPUߥB7s0z2b.Cw ]? 
var/home/core/zuul-output/logs/kubelet.log
Feb 14 04:09:26 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 14 04:09:26 crc restorecon[4704]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to
system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 
14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 
14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 
04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 
crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 
04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 
04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 
crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.686598 4867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691200 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691230 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691239 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691248 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691255 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691264 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691272 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691278 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691285 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691292 4867 
feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691301 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691307 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691313 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691321 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691327 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691352 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691359 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691366 4867 feature_gate.go:330] unrecognized feature gate: Example Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691372 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691378 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691385 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691391 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691401 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691410 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691417 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691424 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691434 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691444 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691451 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691459 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691466 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691473 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691479 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691485 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691491 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691497 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691530 4867 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691539 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691547 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691553 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691559 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691565 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691575 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691585 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691592 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691601 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691609 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691617 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691623 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691629 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691635 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691642 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691648 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691654 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691660 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691666 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691671 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691678 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691685 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691692 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691698 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691704 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691711 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691717 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691723 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691730 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691736 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691746 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691754 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691761 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691768 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693634 4867 flags.go:64] FLAG: --address="0.0.0.0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693706 4867 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693728 4867 flags.go:64] FLAG: --anonymous-auth="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693739 4867 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693752 4867 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693761 4867 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693772 4867 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693792 4867 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693800 4867 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693807 4867 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693815 4867 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693823 4867 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693830 4867 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693838 4867 flags.go:64] FLAG: --cgroup-root=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693844 4867 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693852 4867 flags.go:64] FLAG: --client-ca-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693859 4867 flags.go:64] FLAG: --cloud-config=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693866 4867 flags.go:64] FLAG: --cloud-provider=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693872 4867 flags.go:64] FLAG: --cluster-dns="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693883 4867 flags.go:64] FLAG: --cluster-domain=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693890 4867 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693898 4867 flags.go:64] FLAG: --config-dir=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693905 4867 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693914 4867 flags.go:64] FLAG: --container-log-max-files="5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693927 4867 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693934 4867 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693941 4867 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693948 4867 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693955 4867 flags.go:64] FLAG: --contention-profiling="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693961 4867 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693968 4867 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693976 4867 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693982 4867 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693994 4867 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694000 4867 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694006 4867 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694013 4867 flags.go:64] FLAG: --enable-load-reader="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694020 4867 flags.go:64] FLAG: --enable-server="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694027 4867 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694036 4867 flags.go:64] FLAG: --event-burst="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694043 4867 flags.go:64] FLAG: --event-qps="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694052 4867 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694059 4867 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694067 4867 flags.go:64] FLAG: --eviction-hard=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694076 4867 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694083 4867 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694089 4867 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694096 4867 flags.go:64] FLAG: --eviction-soft=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694102 4867 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694109 4867 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694115 4867 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694122 4867 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694128 4867 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694134 4867 flags.go:64] FLAG: --fail-swap-on="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694141 4867 flags.go:64] FLAG: --feature-gates=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694149 4867 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694158 4867 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694165 4867 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694172 4867 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694179 4867 flags.go:64] FLAG: --healthz-port="10248"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694186 4867 flags.go:64] FLAG: --help="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694192 4867 flags.go:64] FLAG: --hostname-override=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694198 4867 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694206 4867 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694212 4867 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694218 4867 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694225 4867 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694232 4867 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694238 4867 flags.go:64] FLAG: --image-service-endpoint=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694244 4867 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694251 4867 flags.go:64] FLAG: --kube-api-burst="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694258 4867 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694265 4867 flags.go:64] FLAG: --kube-api-qps="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694271 4867 flags.go:64] FLAG: --kube-reserved=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694277 4867 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694283 4867 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694290 4867 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694296 4867 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694303 4867 flags.go:64] FLAG: --lock-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694310 4867 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694317 4867 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694324 4867 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694337 4867 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694344 4867 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694352 4867 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694358 4867 flags.go:64] FLAG: --logging-format="text"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694365 4867 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694372 4867 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694379 4867 flags.go:64] FLAG: --manifest-url=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694385 4867 flags.go:64] FLAG: --manifest-url-header=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694396 4867 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694404 4867 flags.go:64] FLAG: --max-open-files="1000000"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694412 4867 flags.go:64] FLAG: --max-pods="110"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694419 4867 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694426 4867 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694434 4867 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694442 4867 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694451 4867 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694458 4867 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694466 4867 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694490 4867 flags.go:64] FLAG: --node-status-max-images="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694497 4867 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694527 4867 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694534 4867 flags.go:64] FLAG: --pod-cidr=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694540 4867 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694555 4867 flags.go:64] FLAG: --pod-manifest-path=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694563 4867 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694571 4867 flags.go:64] FLAG: --pods-per-core="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694579 4867 flags.go:64] FLAG: --port="10250"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694587 4867 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694594 4867 flags.go:64] FLAG: --provider-id=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694602 4867 flags.go:64] FLAG: --qos-reserved=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694610 4867 flags.go:64] FLAG: --read-only-port="10255"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694618 4867 flags.go:64] FLAG: --register-node="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694625 4867 flags.go:64] FLAG: --register-schedulable="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694633 4867 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694649 4867 flags.go:64] FLAG: --registry-burst="10"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694655 4867 flags.go:64] FLAG: --registry-qps="5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694662 4867 flags.go:64] FLAG: --reserved-cpus=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694669 4867 flags.go:64] FLAG: --reserved-memory=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694680 4867 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694687 4867 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694694 4867 flags.go:64] FLAG: --rotate-certificates="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694701 4867 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694707 4867 flags.go:64] FLAG: --runonce="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694713 4867 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694720 4867 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694727 4867 flags.go:64] FLAG: --seccomp-default="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694734 4867 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694740 4867 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694747 4867 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694754 4867 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694763 4867 flags.go:64] FLAG: --storage-driver-password="root"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694769 4867 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694776 4867 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694783 4867 flags.go:64] FLAG: --storage-driver-user="root"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694789 4867 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694797 4867 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694803 4867 flags.go:64] FLAG: --system-cgroups=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694809 4867 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694821 4867 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694828 4867 flags.go:64] FLAG: --tls-cert-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694834 4867 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694843 4867 flags.go:64] FLAG: --tls-min-version=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694849 4867 flags.go:64] FLAG: --tls-private-key-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694856 4867 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694864 4867 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694871 4867 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694879 4867 flags.go:64] FLAG: --v="2"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694892 4867 flags.go:64] FLAG: --version="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694903 4867 flags.go:64] FLAG: --vmodule=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694913 4867 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694920 4867 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695150 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695157 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695162 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695169 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695175 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695181 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695187 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695192 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695200 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695206 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695213 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695219 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695224 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695232 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695239 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695247 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695253 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695258 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695263 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695269 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695274 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695280 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695285 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695290 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695295 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695304 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695310 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695316 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695321 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695327 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695332 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695337 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695354 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695359 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695365 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695370 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695376 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695381 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695386 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695392 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695397 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695412 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695417 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695422 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695428 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695433 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695439 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695444 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695450 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695455 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695461 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695466 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695472 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695477 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695482 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695489 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695494 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695500 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695524 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695529 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695535 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695540 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695545 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695552 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695561 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695566 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695574 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695582 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695589 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695596 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695603 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.695626 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.719386 4867 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.719429 4867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719538 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719550 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719557 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719563 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719570 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719575 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719583 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719591 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719597 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719603 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719609 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719615 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719620 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719625 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719630 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719635 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719640 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719645 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719650 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719655 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719660 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719665 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719670 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719696 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719701 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719705 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719710 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719715 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719720 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719725 4867
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719730 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719735 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719740 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719747 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719753 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719759 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719765 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719772 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719777 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719783 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719797 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719802 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719807 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719846 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719854 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719860 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719865 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719870 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719875 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719881 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719886 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719892 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719898 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719904 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719909 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719914 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719919 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719924 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719929 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719946 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719951 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719956 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719961 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719966 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719970 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719977 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719982 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719987 4867 feature_gate.go:330] 
unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719992 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719998 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720005 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.720014 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720269 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720277 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720283 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720290 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720295 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720300 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720305 4867 feature_gate.go:330] unrecognized feature 
gate: BareMetalLoadBalancer Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720310 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720315 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720320 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720325 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720329 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720334 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720340 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720345 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720349 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720354 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720359 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720364 4867 feature_gate.go:330] unrecognized feature gate: Example Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720369 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720374 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 
04:09:28.720378 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720383 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720397 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720402 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720409 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720414 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720421 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720427 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720432 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720437 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720442 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720448 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720454 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720460 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720466 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720472 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720478 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720484 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720490 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720497 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720522 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720528 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720534 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720539 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720544 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720549 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720554 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720559 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720564 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720569 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720574 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720578 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720583 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720588 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720594 4867 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720600 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720608 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720614 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720628 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720634 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720639 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720643 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720648 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720653 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720658 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720663 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720667 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720672 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720677 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 
04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720682 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.720689 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.721940 4867 server.go:940] "Client rotation is on, will bootstrap in background" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.745186 4867 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.745352 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.747725 4867 server.go:997] "Starting client certificate rotation" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.747778 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.749836 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 09:20:58.623093149 +0000 UTC Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.749958 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.818179 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.822413 4867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.825319 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.848247 4867 log.go:25] "Validated CRI v1 runtime API" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.886727 4867 log.go:25] "Validated CRI v1 image API" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.889346 4867 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.896324 4867 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-14-04-04-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.896365 4867 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.910809 4867 manager.go:217] Machine: {Timestamp:2026-02-14 04:09:28.909132867 +0000 UTC m=+0.990070201 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1382a0d3-8d29-4f25-bc2c-dc46ad541396 BootID:148e1364-0af4-4e1f-ae72-52166d888ddc Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:89:bf:48 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:89:bf:48 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f1:70:ad Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:61:fa:b2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a1:f2:e6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c2:9c:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:69:22:d8:01:3e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:62:0b:66:1b:d9:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911360 4867 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911555 4867 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911832 4867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912052 4867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912087 4867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912277 4867 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912287 4867 container_manager_linux.go:303] "Creating device plugin manager" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912734 4867 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.915431 4867 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.916460 4867 state_mem.go:36] "Initialized new in-memory state store" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.916858 4867 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920202 4867 kubelet.go:418] "Attempting to sync node with API server" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920226 4867 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920242 4867 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920254 4867 kubelet.go:324] "Adding apiserver pod source" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920267 4867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.924055 4867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.926652 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.926733 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.926887 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.926898 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.927078 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.929768 4867 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931480 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931546 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931562 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931575 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931598 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931613 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931628 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931651 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931668 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931684 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931702 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931716 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.932641 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933142 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933359 4867 server.go:1280] "Started kubelet"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933644 4867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933686 4867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.934653 4867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 04:09:28 crc systemd[1]: Started Kubernetes Kubelet.
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.936005 4867 server.go:460] "Adding debug handlers to kubelet server"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938054 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938099 4867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938116 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:13:30.519114575 +0000 UTC
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938318 4867 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938339 4867 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938376 4867 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.938527 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.938968 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.939259 4867 factory.go:55] Registering systemd factory
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.939282 4867 factory.go:221] Registration of the systemd container factory successfully
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.939466 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.940128 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940815 4867 factory.go:153] Registering CRI-O factory
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940839 4867 factory.go:221] Registration of the crio container factory successfully
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940910 4867 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940934 4867 factory.go:103] Registering Raw factory
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940954 4867 manager.go:1196] Started watching for new ooms in manager
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.939580 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18940178218205da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,LastTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.943107 4867 manager.go:319] Starting recovery of all containers
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947052 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947125 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947142 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947166 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947183 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947208 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947224 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947236 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947258 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947290 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947307 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947327 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947343 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947364 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947379 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947401 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947414 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947428 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947449 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947463 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947481 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947498 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947532 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947553 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947564 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947585 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947599 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947622 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947637 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947659 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947675 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947698 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947733 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947754 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947780 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947798 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947820 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947837 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947947 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947968 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.949857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950459 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950494 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950530 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950545 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950560 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950575 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950591 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950605 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950621 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950647 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950666 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950685 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950703 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950719 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950734 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950749 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950763 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950792 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950807 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950823 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954652 4867 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954727 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954748 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954800 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954815 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954830 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954843 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954871 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954885 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954900 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954912 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954925 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954968 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954982 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954997 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955014 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955031 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955047 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955069 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955086 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955102 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955119 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca"
seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955140 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955184 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955197 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955210 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955221 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955250 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 14 04:09:28 crc 
kubenswrapper[4867]: I0214 04:09:28.955263 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955274 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955320 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955333 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955347 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955359 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955372 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955386 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955399 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955411 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955428 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955474 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955493 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956667 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956732 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956748 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956764 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956779 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956793 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" 
seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956807 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956822 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956836 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956849 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956863 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956880 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 14 04:09:28 crc 
kubenswrapper[4867]: I0214 04:09:28.956895 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956938 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956951 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956963 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956975 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956989 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957001 4867 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957013 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957026 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957038 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957050 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957070 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957082 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957097 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957116 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957136 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957153 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957170 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957182 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957248 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957269 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957286 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957299 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957312 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957329 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957342 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957353 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957367 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957378 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957391 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957404 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957415 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957427 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957439 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957453 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957464 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957477 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957490 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957525 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957541 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957554 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957572 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957590 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957611 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957627 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957642 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957658 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957672 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957688 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957704 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957718 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957731 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957745 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957762 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957774 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" 
seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957789 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957803 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957816 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957829 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957842 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957856 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957870 
4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957885 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957898 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957911 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957926 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957938 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957951 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957964 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958024 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958037 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958054 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958068 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958091 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029"
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958108 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958122 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958136 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958152 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958168 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958182 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958196 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958214 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958230 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958242 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958256 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28"
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958287 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958299 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958312 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958323 4867 reconstruct.go:97] "Volume reconstruction finished"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958333 4867 reconciler.go:26] "Reconciler: start to sync state"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.965183 4867 manager.go:324] Recovery completed
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.974922 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214
04:09:28.977088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977908 4867 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977926 4867 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977946 4867 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.993403 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995844 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995924 4867 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995979 4867 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.996050 4867 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.997582 4867 policy_none.go:49] "None policy: Start"
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.997972 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.998050 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.998690 4867 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.998716 4867 state_mem.go:35] "Initializing new in-memory state store"
Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.039007 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055783 4867 manager.go:334] "Starting Device Plugin manager"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055942 4867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055959 4867 server.go:79] "Starting device plugin registration server"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.056591 4867 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.056614 4867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057093 4867 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057182 4867 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057195 4867 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.067957 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 14 04:09:29
crc kubenswrapper[4867]: I0214 04:09:29.097299 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.097406 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099392 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099815 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100726 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100891 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100890 4867 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101025 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102029 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102268 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103532 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.104968 4867 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.105019 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106733 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106783 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.140286 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.157255 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158801 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.159498 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162804 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162956 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162981 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163067 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163087 4867 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264706 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264777 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264799 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264866 4867
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264888 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264910 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265022 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\"
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265230 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265359 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.359663 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361164 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.361755 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.424430 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.430237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.448002 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.466903 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.470970 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.473198 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77 WatchSource:0}: Error finding container 46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77: Status 404 returned error can't find the container with id 46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77 Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.473777 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172 WatchSource:0}: Error finding container 4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172: Status 404 returned error can't find the container with id 4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172 Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.487608 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d WatchSource:0}: Error finding container 5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d: Status 404 returned 
error can't find the container with id 5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.489465 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7 WatchSource:0}: Error finding container 944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7: Status 404 returned error can't find the container with id 944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7 Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.541409 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.762267 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.764016 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.764372 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.934231 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.939218 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:37:34.555451055 +0000 UTC Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.000898 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d5df1d6f504d3df72192b61ee87a9edaf65546935df593f3d941db9b1a30220b"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.002769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.004411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.005850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.007557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7"} Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.049616 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.049701 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.156562 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.156645 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.343628 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" 
interval="1.6s" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.386478 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.386699 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.407906 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.408044 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.565432 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:30 
crc kubenswrapper[4867]: I0214 04:09:30.567704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567745 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.568428 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.935042 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.940201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:21:01.666447061 +0000 UTC Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.079468 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:50:28.755568387 +0000 UTC Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.079931 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 04:09:32 crc kubenswrapper[4867]: W0214 04:09:32.079924 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.080010 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.080003 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.081097 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.082343 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.169242 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.170988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171072 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.171775 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.351856 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18940178218205da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,LastTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:09:32 crc kubenswrapper[4867]: W0214 04:09:32.554391 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.554870 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: 
connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.934299 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.079831 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:40:52.820159508 +0000 UTC Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090702 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090882 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.091997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092228 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092333 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094215 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094323 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094580 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097229 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097483 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.099701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc 
kubenswrapper[4867]: I0214 04:09:33.099730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.099748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104438 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839"}
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104477 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849"}
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6"}
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a"}
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104595 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:33 crc kubenswrapper[4867]: W0214 04:09:33.105359 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:33 crc kubenswrapper[4867]: E0214 04:09:33.105473 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:33 crc kubenswrapper[4867]: W0214 04:09:33.155240 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:33 crc kubenswrapper[4867]: E0214 04:09:33.155354 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.934606 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.025044 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.037446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.080187 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:38:52.508321325 +0000 UTC
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112329 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112350 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112366 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114409 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f" exitCode=0
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114563 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117100 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117103 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.121035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f"}
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.121068 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.121200 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.182166 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.189500 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.934608 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.080765 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:02:31.03602251 +0000 UTC
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.127621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"}
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.127673 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130535 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08" exitCode=0
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130604 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130660 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08"}
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130709 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130791 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130854 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.132022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.132034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.372725 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374790 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.753270 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.081167 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:19:52.696512746 +0000 UTC
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582"}
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136498 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136528 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d"}
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936"}
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a"}
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136586 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136599 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136600 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.138012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.138060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.390494 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.025538 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.025646 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.082127 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:49:17.805640143 +0000 UTC
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6"}
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143806 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143900 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.995973 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.996208 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.055837 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.083050 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:20:31.753755814 +0000 UTC
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.117562 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.146232 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.146420 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.147961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.147999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:39 crc kubenswrapper[4867]: E0214 04:09:39.068170 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.083271 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:14:43.607269607 +0000 UTC
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.148826 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.150070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.150130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.150150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.229625 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.229864 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.058951 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.084201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:12:37.165607116 +0000 UTC
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.151217 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:41 crc kubenswrapper[4867]: I0214 04:09:41.084688 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:52:27.195226736 +0000 UTC
Feb 14 04:09:42 crc kubenswrapper[4867]: I0214 04:09:42.085590 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:46:17.518804751 +0000 UTC
Feb 14 04:09:43 crc kubenswrapper[4867]: I0214 04:09:43.086776 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:43:15.672784504 +0000 UTC
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.042566 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.042693 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.043946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.044027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.044051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.087243 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:37:33.431158703 +0000 UTC
Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.088156 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:48:46.714722648 +0000 UTC
Feb 14 04:09:45 crc kubenswrapper[4867]: E0214 04:09:45.281271 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 14 04:09:45 crc kubenswrapper[4867]: E0214 04:09:45.375590 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.434611 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.434690 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.441938 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.441984 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.088315 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:53:14.428551025 +0000 UTC
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.170175 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172002 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687" exitCode=255
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"}
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172204 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173864 4867 scope.go:117] "RemoveContainer" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.026343 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.026454 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.088959 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:04:01.589474723 +0000 UTC
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.176796 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.178905 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48"}
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.179105 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.084761 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.084949 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.089051 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:21:56.073156678 +0000 UTC
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.097364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.182099 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.182993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.183019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.183029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:49 crc kubenswrapper[4867]: E0214 04:09:49.068272 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.089388 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:37:35.822211374 +0000 UTC
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.235854 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.236317 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.236472 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.237891 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.237933 4867
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.237956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.245386 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.089846 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:07:24.989012886 +0000 UTC Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.187679 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.449416 4867 trace.go:236] Trace[2028999215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:38.273) (total time: 12175ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[2028999215]: ---"Objects listed" error: 12175ms (04:09:50.449) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[2028999215]: [12.175665655s] [12.175665655s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.449468 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 
04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.450932 4867 trace.go:236] Trace[878925746]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:36.658) (total time: 13792ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[878925746]: ---"Objects listed" error: 13792ms (04:09:50.450) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[878925746]: [13.792414679s] [13.792414679s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.450998 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.453756 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454124 4867 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454299 4867 trace.go:236] Trace[148007666]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:37.974) (total time: 12480ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[148007666]: ---"Objects listed" error: 12479ms (04:09:50.453) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[148007666]: [12.480125756s] [12.480125756s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454393 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.455720 4867 trace.go:236] Trace[1976206573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:36.799) (total time: 13655ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[1976206573]: ---"Objects listed" error: 13655ms (04:09:50.455) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[1976206573]: [13.65576066s] [13.65576066s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.455762 4867 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.483898 4867 csr.go:261] certificate signing request csr-kn5td is approved, waiting to be issued Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.497449 4867 csr.go:257] certificate signing request csr-kn5td is issued Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.089889 4867 apiserver.go:52] "Watching apiserver" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.090058 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:25:26.44749354 +0000 UTC Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.100920 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.101569 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-l6v69","openshift-machine-config-operator/machine-config-daemon-4s95t","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102430 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102485 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103210 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103398 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103439 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102358 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103678 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103741 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.104841 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105722 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105954 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.106185 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 04:09:51 crc 
kubenswrapper[4867]: I0214 04:09:51.106752 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fl729"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107401 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9st5b"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107886 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107903 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108025 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108099 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108295 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108442 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108571 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108608 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108917 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108970 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109273 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109318 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109435 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108932 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109447 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109600 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.110956 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111298 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111388 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111525 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114171 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114603 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114474 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114774 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114787 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114953 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115164 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115170 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115260 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.137591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.139474 4867 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.157967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158039 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158059 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158099 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158156 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158223 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158255 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158276 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158281 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158292 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158388 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158430 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158448 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158466 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158487 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158533 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158602 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158638 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158655 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158735 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158768 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" 
(UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158865 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158958 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159028 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159393 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159555 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159628 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159920 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159194 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160057 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160151 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160373 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160444 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.160535 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.660492055 +0000 UTC m=+23.741429359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160907 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160889 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161135 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161145 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161156 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161481 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162545 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162941 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163313 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163710 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164254 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164668 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165246 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166660 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166986 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.167434 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.167948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168058 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168274 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168455 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168487 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169359 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169767 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170386 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170223 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169953 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170587 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170650 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170666 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170720 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 
04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170824 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171165 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171449 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171922 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171920 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172344 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172483 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172597 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172711 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172826 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172845 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172865 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172887 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172970 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173007 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173044 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173079 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173137 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173175 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173198 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173219 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173236 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173266 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173307 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173329 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173344 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173413 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173362 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173460 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173893 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173936 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174078 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174135 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174280 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174350 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174399 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174440 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174620 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174889 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174923 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174958 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175145 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175220 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175371 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175474 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175860 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175895 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175934 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176119 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176168 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176222 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176275 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174063 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174288 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174637 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175585 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176247 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176419 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177635 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177782 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177817 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177891 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178048 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178397 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178430 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178589 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178871 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178898 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178911 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178933 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179023 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") 
" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179158 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179214 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179242 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8lr\" (UniqueName: \"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.179429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179549 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179998 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180023 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180068 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180233 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180279 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181205 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181225 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181242 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181257 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181273 4867 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181292 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181307 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181322 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181338 4867 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181354 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181389 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181406 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181422 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181437 4867 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181455 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181469 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181486 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181498 4867 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181536 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181550 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181563 4867 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181592 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181607 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181621 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181634 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181649 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181664 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181679 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181692 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181707 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181756 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181774 4867 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181787 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181840 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181855 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181894 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181909 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181926 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181940 4867 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181990 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182007 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182020 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182072 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182110 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182124 4867 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182140 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182177 4867 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182190 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182204 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182216 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182230 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182246 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182261 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182274 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182288 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182303 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182316 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182329 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182342 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182356 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182350 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180929 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176476 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177093 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177121 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181164 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181614 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181719 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182814 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183014 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183949 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184245 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184545 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.185053 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.185202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.185888 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.185957 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.685940085 +0000 UTC m=+23.766877399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.186901 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.186980 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:09:51.686967062 +0000 UTC m=+23.767904606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187629 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187730 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187898 4867 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.188673 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.188892 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189761 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189872 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.190339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.190363 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.191011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192182 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192267 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192310 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192474 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192495 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192533 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192552 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.192572 4867 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192590 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192605 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192619 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192634 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192652 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192667 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192681 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192695 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192709 4867 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192725 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192743 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192867 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192898 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193042 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193060 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193080 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193101 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193120 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193137 4867 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193151 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc 
kubenswrapper[4867]: I0214 04:09:51.193164 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193178 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193195 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193208 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193222 4867 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193239 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193257 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193275 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194233 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194420 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.195249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.196683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.200703 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.201220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.206124 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.206351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207072 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207857 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207888 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207886 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207907 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208031 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.708003124 +0000 UTC m=+23.788940468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208240 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208256 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208271 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208329 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.708306862 +0000 UTC m=+23.789244176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208744 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208864 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209166 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209282 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.211471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.212419 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.216038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.216979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" 
(OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.217795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.217845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.218292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.218442 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220162 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.225823 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.231682 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.238881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.245629 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.254464 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.256878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.261812 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.274390 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294359 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: 
\"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294418 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: 
\"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294972 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 
04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294998 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295133 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295167 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295289 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295416 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295521 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" 
(UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8lr\" (UniqueName: \"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295836 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295866 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" 
(UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295865 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295897 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295938 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: 
I0214 04:09:51.295974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296225 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296253 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296606 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") 
pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296012 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297902 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297934 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298583 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298621 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298632 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298671 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298690 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298698 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298757 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298774 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298787 4867 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298800 4867 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298814 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298826 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298859 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298877 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298891 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc 
kubenswrapper[4867]: I0214 04:09:51.298906 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298922 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298933 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298989 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299006 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299039 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299048 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299057 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299068 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299079 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299090 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299130 4867 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299142 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299152 4867 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299164 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 14 
04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299210 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299227 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299239 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299252 4867 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299263 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299300 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299314 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299325 4867 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299337 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299349 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299388 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299404 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299416 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299428 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299465 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299479 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299490 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299538 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299553 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299564 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299575 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299586 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc 
kubenswrapper[4867]: I0214 04:09:51.299624 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299636 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299647 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299662 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299674 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299717 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299729 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299742 4867 reconciler_common.go:293] "Volume detached for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299754 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299800 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299812 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299837 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299880 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299894 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299907 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299919 4867 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299957 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299969 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299980 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299992 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300003 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300039 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300054 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300066 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300080 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300092 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300130 4867 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300143 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300154 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300166 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300202 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300217 4867 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300228 4867 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300238 4867 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300250 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.300288 4867 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300301 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300312 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300322 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300333 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300369 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300383 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300394 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300404 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300416 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300451 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300464 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.302274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.302383 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.306493 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.315163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.316419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.319233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8lr\" (UniqueName: 
\"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.320074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.321208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.321807 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.334527 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.379472 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.424854 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.430821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.438857 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.446207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.449762 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be WatchSource:0}: Error finding container 8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be: Status 404 returned error can't find the container with id 8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.451120 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3 WatchSource:0}: Error finding container 
e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3: Status 404 returned error can't find the container with id e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.459075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.467698 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.475238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.476481 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c WatchSource:0}: Error finding container 717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c: Status 404 returned error can't find the container with id 717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.489578 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5992e46c_bce7_4b9f_82f2_c7ffb93286cd.slice/crio-8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840 WatchSource:0}: Error finding container 8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840: Status 404 returned error can't find the container with id 8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.498971 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Certificate expiration is 2027-02-14 04:04:50 +0000 UTC, rotation deadline is 2026-11-07 05:31:29.29019253 +0000 UTC Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.499549 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6385h21m37.7906488s for next certificate rotation Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.502124 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.513033 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb77d03e_6ead_48b5_a96a_db4cbd540192.slice/crio-873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331 WatchSource:0}: Error finding container 873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331: Status 404 returned error can't find the container with id 873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331 Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.540344 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd645541b_4940_4e53_a506_1b42bd296dfb.slice/crio-433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a WatchSource:0}: Error finding container 433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a: Status 404 returned error can't find the container with id 433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.711648 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc 
kubenswrapper[4867]: E0214 04:09:51.712006 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.711959191 +0000 UTC m=+24.792896505 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712231 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712283 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712333 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712319071 +0000 UTC m=+24.793256385 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712351 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712365 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712375 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712402 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712393843 +0000 UTC m=+24.793331157 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712472 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712514 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712488115 +0000 UTC m=+24.793425429 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712554 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712564 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712570 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712588 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712582828 +0000 UTC m=+24.793520142 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.775712 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778211 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.790068 4867 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.790454 4867 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.794895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795322 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.806881 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.822623 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825811 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.839392 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.859254 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.875874 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.876033 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881864 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984311 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086781 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.090890 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:48:28.97479689 +0000 UTC Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189301 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.195645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.197210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l6v69" event={"ID":"2afb01bb-2288-4e50-aa66-3e5f2685af58","Type":"ContainerStarted","Data":"a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.197261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l6v69" event={"ID":"2afb01bb-2288-4e50-aa66-3e5f2685af58","Type":"ContainerStarted","Data":"f590eff1e465dd61ee0ef4b9d9a120ddae3c21e03088c79a3e6e0cecc1c6f79e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198839 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198903 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3"} Feb 14 
04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.200222 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.200246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203088 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" exitCode=0 Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"766035eb89c0e6059ab573e34c9ca67206f8aeefdcb68c749029bbaceeefc307"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.205278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.205305 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.206474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.206497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.214796 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.222266 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.230315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.247113 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.258864 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.270265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.279749 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.290790 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292405 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.304998 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.314588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.324289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.333656 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.343297 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fc
eabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.353837 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.379400 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.400086 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.415227 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.428766 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.445643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.462014 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.473782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.484525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.494989 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 
04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498164 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.507311 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.702935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723146 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723280 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723297 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723244187 +0000 UTC m=+26.804181511 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723350 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.7233374 +0000 UTC m=+26.804274934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723357 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723478 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723453953 +0000 UTC m=+26.804391267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723638 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723661 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723680 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723736 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.72372481 +0000 UTC m=+26.804662144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723778 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723794 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723808 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723837 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723829753 +0000 UTC m=+26.804767067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908881 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999120 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999417 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999471 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999527 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999577 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.002435 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.003189 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.004368 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.005006 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.006032 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.006588 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.007176 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.011057 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.011854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.012861 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.013439 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.015882 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.016430 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.016958 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.017907 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.018400 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.019413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.019875 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.020438 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.021488 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.022001 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.022926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.023346 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.024314 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.024779 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.025413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.027618 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.028090 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.029050 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.029552 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.030383 4867 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.030482 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.032034 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.032907 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.033362 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.035168 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.036173 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.036674 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.037295 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.038363 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.038860 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.039926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.040922 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.041548 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.042450 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.043011 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.043912 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.044640 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.045467 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.046023 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.046468 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.047342 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.047934 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.048873 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.091741 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:56:06.882159776 +0000 UTC
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218564 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222002 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222066 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.223924 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2" exitCode=0
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.223962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.242119 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.264946 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.282911 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.303123 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.322438 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.326418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.326994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.327778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc 
kubenswrapper[4867]: I0214 04:09:53.327924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.328028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.345587 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.362376 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.377700 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.398877 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.414031 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.427120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.432003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.432016 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.447450 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536420 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640595 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.846778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.846951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847574 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.032261 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.050263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053657 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.055093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.055759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.080403 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.092842 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:48:01.230754758 +0000 UTC Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.098216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.115216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.129354 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, 
/tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.140588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.151964 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155951 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.163474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.181395 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57
b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.196431 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.212541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.233374 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.235874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.235881 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3" exitCode=0 Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.241221 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.256772 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 
04:09:54.259570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259586 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.279681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.294898 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.309231 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.324309 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.342581 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.358159 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363332 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.378808 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.394404 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.408322 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.420331 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.434590 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.448720 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 
04:09:54.466285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466444 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570796 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675424 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744146 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.744561 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:09:58.744450003 +0000 UTC m=+30.825387367 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744894 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744968 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.745103 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745145 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745204 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745237 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745148 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745350 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.745325376 +0000 UTC m=+30.826262730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745368 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745433 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.745395378 +0000 UTC m=+30.826332732 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745391 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745495 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745551 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.74547043 +0000 UTC m=+30.826407774 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745559 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745624 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.745609604 +0000 UTC m=+30.826546958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.883873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884687 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989648 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996948 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.996996 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.997189 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.997373 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.092955 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:35:23.241655058 +0000 UTC Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093304 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196658 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.242396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.245948 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2" exitCode=0 Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.246011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.273210 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.292707 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300356 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.322614 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.341455 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.358635 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.375895 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.395295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404201 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.417154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.429565 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.447607 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.462709 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.480154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.494894 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.507951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.507997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508007 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.510911 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.530440 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.548490 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.568143 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.583734 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 
04:09:55.598708 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.610464 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.625264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.642232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.659468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.670681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.685209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.709523 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714824 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920918 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023709 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.093598 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:22:57.845939 +0000 UTC Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126326 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.229015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.229036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.253412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.256682 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed" exitCode=0 Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.256892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.274763 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.302398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.321366 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335492 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.339688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.354224 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.365355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.368813 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qbv2g"] Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.369153 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.370436 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371910 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371935 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.380969 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.401261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.416277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.427766 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437669 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.440755 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.454599 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463434 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.471539 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.485608 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.497209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.510479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.528412 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540083 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.541277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.553990 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.563949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " 
pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564114 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.565121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.568108 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1
cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.581855 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.588650 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " 
pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.597489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.619830 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.633654 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642560 4867 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.647446 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.661265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.675523 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.687299 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747693 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996716 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.996898 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.997115 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.997232 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059212 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.094579 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:13:23.403798782 +0000 UTC Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162634 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.262009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qbv2g" event={"ID":"e55b70fd-de82-48c9-b879-de727928e084","Type":"ContainerStarted","Data":"4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.262092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qbv2g" event={"ID":"e55b70fd-de82-48c9-b879-de727928e084","Type":"ContainerStarted","Data":"a9291288bede55d3a5542beca321ac1c9b6dcd94142fc8fcac273384dc5764c8"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265285 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.267777 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57" exitCode=0 Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.267837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.287768 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.305295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.322885 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.343302 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.363457 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368390 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.380127 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.394789 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.409784 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.434994 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.458359 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471493 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.480836 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.496207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.511705 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.527289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.540873 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.555670 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.569419 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573763 4867 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573793 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.584184 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.597624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.607713 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.619341 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.628541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.642779 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.665598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678758 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.680052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.692265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.705666 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.727062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780875 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888824 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991737 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.095020 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:12:48.974108267 +0000 UTC Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280358 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280609 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.286429 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7" exitCode=0 Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.286559 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.296159 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300786 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.309080 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.309199 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.314584 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.333318 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216
d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.346814 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de2
0a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.360629 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.379643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.395940 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.409342 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.425278 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.441985 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.459465 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.479423 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.497933 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc 
kubenswrapper[4867]: I0214 04:09:58.508291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.522350 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.542528 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.554876 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.576398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.598640 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.611346 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.628422 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.645163 4867 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.658103 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.670077 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.685578 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.703953 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, 
/tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713243 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.717207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.730676 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.747402 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.748296 4867 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.749059 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/events\": read tcp 38.102.83.113:33772->38.102.83.113:6443: use of closed network connection" 
event="&Event{ObjectMeta:{multus-additional-cni-plugins-9st5b.1894017f1272e637 openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:multus-additional-cni-plugins-9st5b,UID:d645541b-4940-4e53-a506-1b42bd296dfb,APIVersion:v1,ResourceVersion:26450,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Started,Message:Started container kube-multus-additional-cni-plugins,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:58.745441847 +0000 UTC m=+30.826379161,LastTimestamp:2026-02-14 04:09:58.745441847 +0000 UTC m=+30.826379161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783333 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783487 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783520 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783520 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783627 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:10:06.783606256 +0000 UTC m=+38.864543570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783531 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783714 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783749 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783718069 +0000 UTC m=+38.864655383 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783759 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783630 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783783 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783813 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783805871 +0000 UTC m=+38.864743185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783851 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783829712 +0000 UTC m=+38.864767046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783876 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783866233 +0000 UTC m=+38.864803557 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918460 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.996867 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.997052 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.997081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997210 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.013185 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021212 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021222 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.028370 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.046354 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.071205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.089190 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.095407 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:49:21.56343548 +0000 UTC Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.103353 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.116468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123216 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.130831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.142104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.156307 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.170801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, 
/tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.183009 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.196066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.208705 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc 
kubenswrapper[4867]: I0214 04:09:59.226835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.297388 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.297487 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.314934 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.327639 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329637 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.340209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.361968 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.372213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a110
4338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.383025 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.399804 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.413618 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432931 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.434678 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.454781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.476791 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.490005 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.504045 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.520944 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc 
kubenswrapper[4867]: I0214 04:09:59.535671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637966 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741621 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843805 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.095758 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:51:22.785367467 +0000 UTC Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151106 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255724 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.318257 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459910 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562415 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.766989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.868978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869049 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.972010 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996629 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075093 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.095982 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:36:30.861279809 +0000 UTC Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177629 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280396 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.322976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326140 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" exitCode=1 Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326913 4867 scope.go:117] "RemoveContainer" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.344264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.358158 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.370770 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.382253 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383321 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.401015 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6
288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.415755 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.428164 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14
T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.437737 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.448412 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.461824 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.473914 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486796 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.488543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.500941 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.514469 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c97
13013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:
09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589803 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999437 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.083401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.096338 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:16:55.290504014 +0000 UTC Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.096520 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"p
odIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101207 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.113765 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.131042 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e144
4fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.140712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.152259 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.162364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.174437 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.183154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193218 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.194886 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.204535 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.207944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208083 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.213491 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6
288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.220894 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.223908 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224128 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.234568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.235163 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.238982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239094 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.250095 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.251358 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.266132 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.267751 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.268033 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269841 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.332805 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.335831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.335981 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.349094 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is 
after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.359382 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372363 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.375778 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.398795 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.409873 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.421542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.433114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.448822 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.463003 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.475931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc 
kubenswrapper[4867]: I0214 04:10:02.475998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.477379 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb26
77284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.492261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.503018 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.513172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.522660 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc 
kubenswrapper[4867]: I0214 04:10:02.578722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782881 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987744 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.996984 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.997155 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.997179 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997278 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997442 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997584 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091541 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.096756 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:33:02.271878069 +0000 UTC Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193906 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296712 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.339822 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.340873 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343795 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" exitCode=1 Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343848 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343998 4867 scope.go:117] "RemoveContainer" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.344793 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" Feb 14 04:10:03 crc kubenswrapper[4867]: E0214 04:10:03.344961 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.358075 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.370413 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.386945 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa628
8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.396464 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.398935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.398989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.408735 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.420372 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.430364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.442958 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.445584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr"] Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.445988 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.447588 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.448644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.455754 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.469832 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413
bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f6
8c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.479672 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.490306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505139 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.519049 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.534631 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.545106 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546681 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: 
\"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.555946 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\
\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.566264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.578759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.589702 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\"
,\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.599174 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.613137 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.629414 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa628
8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.638269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647585 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.648213 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.648875 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 
crc kubenswrapper[4867]: I0214 04:10:03.648979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.653679 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.657701 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.664078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.669716 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.680572 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.691809 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711926 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.756728 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: W0214 04:10:03.777275 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05957e01_c589_4408_8f80_cd33f8856262.slice/crio-70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6 WatchSource:0}: Error finding container 70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6: Status 404 returned error can't find the container with id 70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6 Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814584 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020359 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.097286 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:50:03.345451093 +0000 UTC Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328679 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.347621 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.366247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.379716 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.393847 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.404985 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.417065 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc 
kubenswrapper[4867]: I0214 04:10:04.431311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431356 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.438960 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb26
77284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.453251 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: 
I0214 04:10:04.469194 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08
ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.481106 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.498796 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.516394 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: 
I0214 04:10:04.527238 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.539068 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.549113 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.563308 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636404 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.946029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.946118 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997258 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.997466 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997275 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997908 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.997995 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.998279 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050281 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.097725 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:40:00.386787958 +0000 UTC Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.256973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.280284 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"] Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.281168 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.281307 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.305387 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.325149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.342599 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359790 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.365782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.389289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.410045 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.427182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc 
kubenswrapper[4867]: I0214 04:10:05.446769 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c7
4e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.460530 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.462839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.462970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463260 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.465908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.465953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.487249 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.507962 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.527372 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.542247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.558688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.565961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.566956 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.567051 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.067028504 +0000 UTC m=+38.147965838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.587051 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: 
I0214 04:10:05.588069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.603298 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669743 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.875960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876785 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.072975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.073325 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.073562 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:07.073471127 +0000 UTC m=+39.154408461 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.083940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.098241 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:09:24.748973193 +0000 UTC Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186980 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289619 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393823 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.601757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.601965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.882650 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:10:22.882613735 +0000 UTC m=+54.963551059 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.883028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883142 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883173 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883198 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883212 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883226 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883217 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883316 4867 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883287123 +0000 UTC m=+54.964224477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883347 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883334835 +0000 UTC m=+54.964272189 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883323 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883170 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883474 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883455478 +0000 UTC m=+54.964393032 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883542 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883489739 +0000 UTC m=+54.964427243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912315 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997045 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997176 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997247 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997307 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997055 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997480 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997696 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016197 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.085580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:07 crc kubenswrapper[4867]: E0214 04:10:07.085776 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:07 crc kubenswrapper[4867]: E0214 04:10:07.085895 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:09.085864123 +0000 UTC m=+41.166801477 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.099463 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:07:18.054801582 +0000 UTC Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.118925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.529706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530329 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632658 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942443 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045162 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.099925 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:36:24.211384726 +0000 UTC
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148593 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357252 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563983 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771691 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978663 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.996915 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997009 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997050 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997096 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997702 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997854 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997998 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.998051 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.018999 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.034006 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.052117 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.073352 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\"
,\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080459 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.094791 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.100649 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:01:09.390211817 +0000 UTC Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.108040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:09 crc kubenswrapper[4867]: E0214 04:10:09.108147 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:09 crc kubenswrapper[4867]: E0214 04:10:09.108186 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:13.108173708 +0000 UTC m=+45.189111022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.110096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.144052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa628
8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.158484 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.174104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14
T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184264 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.194277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.210348 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.228283 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.249030 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.266276 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c97
13013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:
09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.279343 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288977 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.294582 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc 
kubenswrapper[4867]: I0214 04:10:09.392715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.394058 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.499002 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602097 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704407 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807418 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910316 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013729 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.101445 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:56:21.266624415 +0000 UTC
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.116933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117064 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219893 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322341 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425666 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631854 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837850 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.996885 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.996978 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997024 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.997116 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997190 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.997215 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997374 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997559 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043294 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.102466 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:01:13.135265482 +0000 UTC
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146408 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355557 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458062 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560839 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662985 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764800 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866925 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969139 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.102878 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:55:00.290882013 +0000 UTC
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276490 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.378998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440887 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.460483 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.476595 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480638 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.493185 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497855 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.508220 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511220 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511231 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.521891 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.522045 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523573 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625951 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.728005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.728018 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.830242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934206 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996289 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996350 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996422 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996415 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996444 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996569 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996649 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.103016 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:42:58.259945215 +0000 UTC
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138520 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.157030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:13 crc kubenswrapper[4867]: E0214 04:10:13.157161 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 14 04:10:13 crc kubenswrapper[4867]: E0214 04:10:13.157212 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:21.157196485 +0000 UTC m=+53.238133799 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.240973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241042 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343655 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446552 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.550017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.550028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962643 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065352 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.103263 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:44:44.538939343 +0000 UTC
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270622 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373307 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579871 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785655 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997170 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.997419 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997727 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997808 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.997900 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.998004 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.998055 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.103475 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:59:59.174271026 +0000 UTC Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299471 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504495 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711155 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.915959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916039 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018619 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.104619 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:07:47.246874983 +0000 UTC Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121403 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224145 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224219 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.429944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.429988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532645 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.737010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.737024 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839764 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942621 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996405 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996526 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996618 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996618 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996795 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996638 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996972 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.997023 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044941 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.105040 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:48:16.587855307 +0000 UTC Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147150 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250209 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455812 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662673 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766259 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075606 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.105262 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:34:43.543143394 +0000 UTC Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383784 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486450 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589656 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898961 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996316 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996359 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996639 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996675 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.996914 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996988 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997278 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997436 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.001955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002072 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.034295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: 
I0214 04:10:19.049328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.066789 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.082460 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.099401 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 
04:10:19.104717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104729 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.105404 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:03:49.409035308 +0000 UTC Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.116368 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.132994 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.147137 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.161856 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.174363 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc 
kubenswrapper[4867]: I0214 04:10:19.190416 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc 
kubenswrapper[4867]: I0214 04:10:19.208011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.222634 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.237853 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.252294 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.266489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.281810 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\"
,\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.298096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.312543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.328823 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.344257 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.356462 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.370355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.398790 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.408809 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.411781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.412959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.425299 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c
86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.445271 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.465351 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.478661 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.493988 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c97
13013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:
09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.508283 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.524516 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc 
kubenswrapper[4867]: I0214 04:10:19.618095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618190 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720760 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.823400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926932 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.106093 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:59:20.152911364 +0000 UTC Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337866 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.418555 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.419644 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422180 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" exitCode=1 Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422248 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.423542 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.423813 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440205 4867 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440262 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.484192 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.506885 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\"
,\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.526448 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.539240 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543175 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.567851 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 
metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 
04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\
\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.584134 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.603392 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.615772 4867 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.630634 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646641 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.664289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.679164 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.692910 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.707903 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc 
kubenswrapper[4867]: I0214 04:10:20.722775 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc 
kubenswrapper[4867]: I0214 04:10:20.742310 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be3
0eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749288 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851766 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955187 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996813 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996906 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996840 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996822 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997020 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997238 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997663 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058476 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.106781 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:15:43.3808039 +0000 UTC Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.162023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.162053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.240928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.241142 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.241246 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:37.241228115 +0000 UTC m=+69.322165419 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265351 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369287 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.418539 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.428237 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.433252 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.433451 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.460771 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.474394 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.490688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.507459 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb26
77284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.521474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.533357 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc 
kubenswrapper[4867]: I0214 04:10:21.548669 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc 
kubenswrapper[4867]: I0214 04:10:21.563000 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575341 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.580098 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.599605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.613727 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.627261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.646792 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping 
reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.657900 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.671172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14
T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678582 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.682476 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883311 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.089992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.107003 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:16:18.305377205 +0000 UTC Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298380 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401636 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.437420 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.437713 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.505412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.505856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506441 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610495 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.625870 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630542 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.644085 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.647920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.647990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648244 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.662392 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.667026 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.688027 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.693039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.693069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.715032 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.715562 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.717901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.717992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718091 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820565 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958706 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958923 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959055 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959072 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959085 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959078 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959006538 +0000 UTC m=+87.039943922 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959132 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959117331 +0000 UTC m=+87.040054655 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959209 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959283 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959313 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 
04:10:22.959480 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959541 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959343 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959306626 +0000 UTC m=+87.040243970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959660 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959628185 +0000 UTC m=+87.040565539 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959706 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959686276 +0000 UTC m=+87.040623690 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997156 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997281 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997428 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997596 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026373 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.107142 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:33:49.629365913 +0000 UTC Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129220 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335220 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335256 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438408 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540145 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642988 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746243 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053678 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.108161 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:58:01.052186568 +0000 UTC Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259641 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362737 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464977 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668770 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771376 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873853 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976523 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996946 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996995 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.997039 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996965 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997223 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997794 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078558 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.109021 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:49:53.87961093 +0000 UTC Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.384993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541276 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.756356 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.764647 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.771477 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.787021 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687
fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.797602 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.807227 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc 
kubenswrapper[4867]: I0214 04:10:25.818355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c7
4e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.828853 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.839852 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848994 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.849018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.849034 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.850684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb
7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.861854 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.871544 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.882677 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.902184 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.911525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.921285 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.932488 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.943219 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951952 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053593 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.109279 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:58:45.075792111 +0000 UTC Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257292 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360204 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462660 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668268 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770315 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872758 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996330 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996371 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996414 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996543 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996436 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996660 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996886 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076580 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.110007 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:29:51.426943728 +0000 UTC Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.180001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.180018 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282328 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384989 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487892 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796636 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002184 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.110453 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:32:21.478362042 +0000 UTC Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208367 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412784 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.515971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619960 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723412 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.825948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826052 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996483 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996912 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996962 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.012047 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.026586 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.032972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033399 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.038532 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.051215 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.061205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc 
kubenswrapper[4867]: I0214 04:10:29.079963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc 
kubenswrapper[4867]: I0214 04:10:29.096458 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be3
0eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.108365 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.110602 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:26:17.515223781 +0000 UTC Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.121476 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e144
4fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136081 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136822 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.149331 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.161314 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.171347 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.184287 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14
T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.194804 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.206844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.223050 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238669 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341777 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546872 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855358 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958343 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060330 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.111166 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:45:41.466352365 +0000 UTC Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164945 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267178 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679276 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781894 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986686 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996879 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996936 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996959 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997061 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997233 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089355 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.111555 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:29:04.870028864 +0000 UTC Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.191980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498946 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.705005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.705017 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.910015 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.112390 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:08:54.408153659 +0000 UTC Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115904 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218893 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428929 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.530983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531067 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633426 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.735681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736597 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840333 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.942986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.998477 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.998593 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.998758 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.998807 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.999069 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.999113 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.999145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.999182 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.059356 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062327 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.100576 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.104130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.104156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.104165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.104178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.104188 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.112638 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:49:05.778615726 +0000 UTC Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.114579 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",
\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.114682 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.147402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.147427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.147435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.147445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.147454 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.249838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.249904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.249944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.249978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.249986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.352786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.352858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.352880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.352910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.352933 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.455244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.455283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.455292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.455307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.455316 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.557990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.558097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.558122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.558149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.558168 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.660071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.660105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.660116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.660133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.660143 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.762875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.762939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.762954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.762976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.762993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.865781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.865811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.865821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.865834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.865843 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.968417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.968454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.968463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.968480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.968491 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071267 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.113513 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:50:08.313660573 +0000 UTC Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.173899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.173930 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.173939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.173953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.173963 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.276109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.276144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.276153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.276168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.276178 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.379234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.379278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.379288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.379303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.379313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.481138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.481172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.481180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.481194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.481203 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.583591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.583639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.583650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.583668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.583680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.685845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.685883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.685893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.685908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.685918 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.787653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.787698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.787709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.787726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.787735 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.889737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.889774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.889790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.889806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.889817 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.991336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.991376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.991385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.991399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.991410 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996766 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.996935 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997134 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997379 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996807 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.093836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.093893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.093906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.093918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.093926 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.113951 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:42:19.632037517 +0000 UTC Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.196084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.196120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.196130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.196144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.196153 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298565 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.502995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605489 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810177 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911955 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.114481 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:40:05.242522148 +0000 UTC Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116810 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219797 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.324085 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428253 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736368 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996744 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996803 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.996893 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996928 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996768 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997058 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997092 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997176 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.997882 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.998059 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043342 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.115175 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:13:16.935884822 +0000 UTC Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145983 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248632 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.314295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:37 crc kubenswrapper[4867]: E0214 04:10:37.314468 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:37 crc kubenswrapper[4867]: E0214 04:10:37.314642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:09.314614387 +0000 UTC m=+101.395551781 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.351010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.351022 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657851 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.759954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.759991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862216 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964432 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067606 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.115829 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:02:42.907822636 +0000 UTC Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.170090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.170104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272685 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375480 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.477972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784335 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.996908 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997095 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997205 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997251 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997539 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997609 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.011759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.027037 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-p
lugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"
mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c
2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.037952 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:1
0:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.046640 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc 
kubenswrapper[4867]: I0214 04:10:39.059230 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c7
4e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.068178 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.082345 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090781 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090802 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.093581 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb
7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.112001 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.116458 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:09:11.816422242 +0000 UTC Feb 14 04:10:39 crc 
kubenswrapper[4867]: I0214 04:10:39.120809 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.132567 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.148889 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping 
reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.158659 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.168802 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.179843 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193267 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.206306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296653 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398946 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485390 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485438 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7" exitCode=1 Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485790 4867 scope.go:117] "RemoveContainer" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.497065 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.501033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.501134 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.508125 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.518695 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.533099 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.544802 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.559104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.570215 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.580933 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc 
kubenswrapper[4867]: I0214 04:10:39.592525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.603912 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604108 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.614614 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.627421 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.636149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.646047 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.662272 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.671193 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.682721 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14
T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706162 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013270 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115725 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115804 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.116809 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:59:06.850031552 +0000 UTC Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217842 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320250 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.490540 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.490594 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.501472 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.515842 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525421 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.526708 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.539232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.558613 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping 
reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.571796 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.583195 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.604844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.623645 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629613 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.638046 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc 
kubenswrapper[4867]: I0214 04:10:40.654778 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.672712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb26
77284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.688038 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.705079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.719093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732149 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.733495 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.744080 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc 
kubenswrapper[4867]: I0214 04:10:40.834750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834771 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996565 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996609 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996630 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996908 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996951 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996968 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.997171 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040427 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.116893 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:21:30.684385757 +0000 UTC Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143219 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.245969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246061 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348942 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.554970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555081 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657498 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759555 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.067062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.067074 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.117933 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:42:20.592576057 +0000 UTC Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374600 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.478121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.478210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479618 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583787 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687284 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789617 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892369 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995880 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996210 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996304 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996668 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996720 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996724 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996810 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996994 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.997083 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099886 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.118748 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:25:39.04593179 +0000 UTC Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205166 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.226792 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.232004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.232012 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.249246 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.272301 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.297339 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.332283 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.332882 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.345976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448999 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551570 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654560 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757885 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860528 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.963013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.963024 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.119422 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:38:26.932697211 +0000 UTC Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.169779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.169860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.169879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.169908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.169934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.272769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.272843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.272865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.272898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.272919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.375533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.375560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.375569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.375583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.375591 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.478559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.478626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.478642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.478670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.478689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.580204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.580239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.580248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.580267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.580278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.683263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.683365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.683387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.683423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.683447 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.787191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.787251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.787261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.787280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.787292 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.890587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.891109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.891198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.891362 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.891566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.994062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.994103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.994112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.994127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.994137 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996335 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996445 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996632 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996855 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996997 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.997145 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.096971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.097005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.097018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.097034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.097046 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.120440 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:10:38.63329086 +0000 UTC Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.199983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.200009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.200017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.200030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.200039 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.303851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.304003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.304027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.304051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.304069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.407625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.407722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.407746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.407777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.407801 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.509243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.509270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.509278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.509290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.509298 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.612213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.612244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.612252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.612265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.612274 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.715187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.715225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.715233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.715246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.715254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.820850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.820918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.820932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.820953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.820967 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.923307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.923351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.923363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.923379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.923390 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:45Z","lastTransitionTime":"2026-02-14T04:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.026667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.026730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.026741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.026755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.026768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.121438 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:00:03.702248693 +0000 UTC Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.131797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.131861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.131878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.131907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.131928 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.235149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.235214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.235225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.235242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.235255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.338013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.338082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.338099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.338126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.338143 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.441482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.441571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.441592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.441611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.441624 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.544681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.544722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.544734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.544747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.544756 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.647374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.647423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.647433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.647450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.647460 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.750422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.750466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.750476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.750491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.750521 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.853640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.853703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.853717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.853734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.853747 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.956334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.956379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.956388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.956402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.956411 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:46Z","lastTransitionTime":"2026-02-14T04:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.996674 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.996791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:46 crc kubenswrapper[4867]: E0214 04:10:46.996829 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.996863 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.996920 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:46 crc kubenswrapper[4867]: E0214 04:10:46.997060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:46 crc kubenswrapper[4867]: E0214 04:10:46.997181 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:46 crc kubenswrapper[4867]: E0214 04:10:46.997279 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.058285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.058327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.058336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.058352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.058365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.121705 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:31:37.838659421 +0000 UTC Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.160762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.160796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.160812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.160828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.160839 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262897 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365402 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570246 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776613 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879416 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.149302 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:44:08.910983852 +0000 UTC Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186647 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391748 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.494002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.494014 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596484 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698302 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.800944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801061 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997123 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997359 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997572 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997703 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997902 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005981 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.013469 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.025116 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.036607 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.046591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.062071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.075269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.090140 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.106863 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109310 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.119630 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.134991 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.149274 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.149394 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:41:41.583765224 +0000 UTC Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.164598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.182896 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.196168 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211515 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.218768 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.230624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.247360 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc 
kubenswrapper[4867]: I0214 04:10:49.313361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518681 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.621987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622111 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.834956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938249 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143761 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.150321 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:08:15.741219139 +0000 UTC
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450645 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553323 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655771 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758933 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964255 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964302 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996854 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996866 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997014 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.997196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997299 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997369 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997497 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.998148 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.010851 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066747 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.151201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:36:17.77420617 +0000 UTC
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170154 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.375936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376382 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.479574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.479990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480656 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583821 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686501 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789555 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893308 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996487 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.098973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099045 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.151442 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:25:43.888723023 +0000 UTC Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303793 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406309 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508569 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.531078 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.531736 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534160 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" exitCode=1 Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534248 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.535211 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.535441 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.548337 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.561405 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.570179 4867 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.581577 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.597849 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector 
*v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5
cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc 
kubenswrapper[4867]: I0214 04:10:52.610823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.622560 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.634076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.645714 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.656657 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc 
kubenswrapper[4867]: I0214 04:10:52.673786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.687090 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d71113
9b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.700207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.711729 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33
af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712830 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712856 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.724630 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.735360 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.746165 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.757172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc 
kubenswrapper[4867]: I0214 04:10:52.815787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815798 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918225 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996856 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996911 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996942 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997065 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.997101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997238 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997385 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997622 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020773 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.123919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.123996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.152224 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:32:13.672254515 +0000 UTC Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227884 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330096 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432877 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.538328 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637851 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656708 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.677386 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.697704 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.702004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.702025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.720680 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724411 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.743019 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746874 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.761216 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.761371 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.763014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.763030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968485 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.152864 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:01:11.664243721 +0000 UTC Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381113 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484449 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586990 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.689958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.689992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896362 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996156 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996235 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996305 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996328 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996387 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996545 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996670 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.998995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999058 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017724 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017768 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017770 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 
04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017791 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018116 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.017673752 +0000 UTC m=+151.098611076 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.018319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018382 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:59.01836285 +0000 UTC m=+151.099300164 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018602 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.018581106 +0000 UTC m=+151.099518460 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.018637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018752 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018771 4867 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018787 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018833 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.018818832 +0000 UTC m=+151.099756176 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018415 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.019027 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.019014857 +0000 UTC m=+151.099952201 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.153974 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:47:39.383829598 +0000 UTC Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.203983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204108 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410146 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.512968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718631 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821732 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128586 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.154802 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:52:57.764848678 +0000 UTC Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436718 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641911 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744358 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846639 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.949982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996390 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996420 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996399 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996602 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996722 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996886 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996974 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154459 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.155500 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:38:33.848117862 +0000 UTC Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359487 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462116 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667291 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771560 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875412 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979156 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.156337 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:24:28.006731968 +0000 UTC Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297868 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.503864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504561 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.813130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.813275 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.996975 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.997019 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.996983 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.997100 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997207 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997398 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997474 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997715 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.013764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.029906 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.048214 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.061877 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.075460 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.086580 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.099656 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.121988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.127079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping 
reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.139066 
4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.156239 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.156521 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-18 07:16:15.549282464 +0000 UTC Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.170220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8f
a5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.183963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.196582 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.217444 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224736 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.231482 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.246627 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb26
77284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.259006 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.269182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc 
kubenswrapper[4867]: I0214 04:10:59.327040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327128 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532343 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737330 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943663 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.147946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.157279 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:09:32.773233552 +0000 UTC Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354943 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457870 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663711 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.766942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.766999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870899 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973792 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996365 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996591 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.996749 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.996894 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.997029 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.997154 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076578 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.158109 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:50:45.83111862 +0000 UTC Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.180819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.284938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.285753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.285897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.286029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.286152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.390035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.597159 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700965 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.805007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.805029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909830 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013170 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.014905 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.117003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.117025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.158685 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:57:41.488149865 +0000 UTC Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220763 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.324988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325132 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428407 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532229 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738695 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842959 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.946834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947759 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000898 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000952 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.001048 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000917 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001153 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001288 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001427 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001628 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051927 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.155024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.155041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.159751 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:26:24.286025591 +0000 UTC Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.259091 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464857 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669947 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772554 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875442 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919411 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.937993 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.942703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.942755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943907 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.968059 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973194 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.992402 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.011010 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.031313 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.031472 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033953 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.160429 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:28:47.905212162 +0000 UTC Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239831 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342340 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.547977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548080 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753460 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856251 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958923 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997264 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997315 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997431 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997426 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997631 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997653 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997695 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.060970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.161441 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:03:05.286064072 +0000 UTC Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163853 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.267043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370618 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.473309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.473972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474268 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577693 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679929 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885924 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.999558 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.000249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.014759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.028978 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc 
kubenswrapper[4867]: I0214 04:11:06.065975 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.080303 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d71113
9b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.093033 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.101120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.113694 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.133718 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.150130 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.162388 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:37:09.047778055 +0000 UTC Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.164620 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.184388 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for 
Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195108 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195873 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.209220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.221914 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.235890 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.248220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.263495 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.277290 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.289371 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298891 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298918 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.300196 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96d081a5-08ac-4716-b6ab-64959cf2933f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504499 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996818 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996992 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.996998 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997056 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997082 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016352 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.163464 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:05:50.328609962 +0000 UTC Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324173 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.426970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427370 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633968 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737099 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842461 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.154977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155116 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.164609 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:02:11.701766968 +0000 UTC Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258742 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.465832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.465965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466066 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.568933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.568985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671592 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773721 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875445 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.977997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978130 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996762 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996765 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996773 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996894 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997095 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.015782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc 
kubenswrapper[4867]: I0214 04:11:09.052922 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.070316 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d71113
9b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.081967 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082134 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.089145 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.109150 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.130154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.147764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.165539 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:03:53.011435318 +0000 UTC Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.165976 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.182071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0a
f6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc 
kubenswrapper[4867]: I0214 04:11:09.187023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.187041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.194542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.209410 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.226617 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.246312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.278570 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4
cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291312 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.302328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.326254 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.348917 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.361281 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96d081a5-08ac-4716-b6ab-64959cf2933f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.379985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:09 crc kubenswrapper[4867]: E0214 04:11:09.380219 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:11:09 crc kubenswrapper[4867]: E0214 04:11:09.380349 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:12:13.380313277 +0000 UTC m=+165.461250591 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.383679 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394540 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.600030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.600044 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.702941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703064 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805490 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908772 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012925 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.166080 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:00:15.082804538 +0000 UTC Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.321902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.321983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424559 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527132 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630952 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733813 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.836998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837140 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940265 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996745 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996922 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997054 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.997101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997221 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043171 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.166387 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:57:45.934682728 +0000 UTC Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248230 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351612 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454314 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.763962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764040 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072371 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.167075 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:51:23.461573499 +0000 UTC Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175173 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279360 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381648 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484964 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.587010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.587019 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.690036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.690059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792743 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.896001 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997317 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997263 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997357 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997471 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997638 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997940 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.998034 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.998977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101187 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.168054 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:12:37.541447037 +0000 UTC Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306449 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410136 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616317 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718715 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924444 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028171 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.130900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.130979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131047 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.168630 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:34:04.508544899 +0000 UTC Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.234993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235153 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.268898 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7"] Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.269410 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.273755 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274003 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.292414 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podStartSLOduration=84.292394151 podStartE2EDuration="1m24.292394151s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.292393471 +0000 UTC m=+106.373330815" watchObservedRunningTime="2026-02-14 04:11:14.292394151 +0000 UTC m=+106.373331475" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.330828 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.330804115 podStartE2EDuration="1m23.330804115s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.314207081 +0000 UTC m=+106.395144405" watchObservedRunningTime="2026-02-14 04:11:14.330804115 +0000 
UTC m=+106.411741439" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.404627 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qbv2g" podStartSLOduration=83.404605743 podStartE2EDuration="1m23.404605743s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.389142539 +0000 UTC m=+106.470079863" watchObservedRunningTime="2026-02-14 04:11:14.404605743 +0000 UTC m=+106.485543067" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.426728 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l6v69" podStartSLOduration=84.426709271 podStartE2EDuration="1m24.426709271s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.42667261 +0000 UTC m=+106.507609934" watchObservedRunningTime="2026-02-14 04:11:14.426709271 +0000 UTC m=+106.507646585" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432333 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432653 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.433193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.439017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.447499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.481604 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.481589485 
podStartE2EDuration="1m20.481589485s" podCreationTimestamp="2026-02-14 04:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.481231845 +0000 UTC m=+106.562169159" watchObservedRunningTime="2026-02-14 04:11:14.481589485 +0000 UTC m=+106.562526799" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.501225 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.501209057 podStartE2EDuration="49.501209057s" podCreationTimestamp="2026-02-14 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.493127376 +0000 UTC m=+106.574064690" watchObservedRunningTime="2026-02-14 04:11:14.501209057 +0000 UTC m=+106.582146371" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.501314 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.5013098 podStartE2EDuration="12.5013098s" podCreationTimestamp="2026-02-14 04:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.500993162 +0000 UTC m=+106.581930476" watchObservedRunningTime="2026-02-14 04:11:14.5013098 +0000 UTC m=+106.582247114" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.515253 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" podStartSLOduration=83.515230394 podStartE2EDuration="1m23.515230394s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
04:11:14.514485814 +0000 UTC m=+106.595423128" watchObservedRunningTime="2026-02-14 04:11:14.515230394 +0000 UTC m=+106.596167718" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.545683 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.545668709 podStartE2EDuration="23.545668709s" podCreationTimestamp="2026-02-14 04:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.544802586 +0000 UTC m=+106.625739900" watchObservedRunningTime="2026-02-14 04:11:14.545668709 +0000 UTC m=+106.626606023" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.557912 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fl729" podStartSLOduration=84.557897019 podStartE2EDuration="1m24.557897019s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.557332824 +0000 UTC m=+106.638270138" watchObservedRunningTime="2026-02-14 04:11:14.557897019 +0000 UTC m=+106.638834333" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.576837 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9st5b" podStartSLOduration=84.576818363 podStartE2EDuration="1m24.576818363s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.576004532 +0000 UTC m=+106.656941846" watchObservedRunningTime="2026-02-14 04:11:14.576818363 +0000 UTC m=+106.657755677" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.597456 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.615767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" event={"ID":"079535bb-0b3b-4373-bdc5-6dbf0d926179","Type":"ContainerStarted","Data":"dd9eb63857a7b6f97d57091eaa0caa9a4cf1c61cd4592adf8a4e53b3ca48770b"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997180 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997234 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997379 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997570 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997652 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.169420 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:55:37.050201241 +0000 UTC Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.169601 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.178882 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.621604 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" event={"ID":"079535bb-0b3b-4373-bdc5-6dbf0d926179","Type":"ContainerStarted","Data":"f30dc88fc10bb24a59a40d3befe525549578a5e026d09551fa9145de8fdb8f0f"} Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996872 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.996990 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997102 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997203 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.996205 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.996259 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.998804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.998798 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.998848 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.998986 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.999105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.999205 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:19 crc kubenswrapper[4867]: I0214 04:11:19.998353 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:19 crc kubenswrapper[4867]: E0214 04:11:19.998503 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997161 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997213 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997287 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997291 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997367 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997430 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997548 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997593 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:21 crc kubenswrapper[4867]: I0214 04:11:21.418783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:11:21 crc kubenswrapper[4867]: I0214 04:11:21.420227 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:21 crc kubenswrapper[4867]: E0214 04:11:21.420486 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.996819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.996860 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.996960 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997114 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.997587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997664 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.997694 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997765 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996387 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996685 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996822 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.997976 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998018 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998234 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.655391 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log"
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.655980 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log"
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656045 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b" exitCode=1
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656087 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"}
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656129 4867 scope.go:117] "RemoveContainer" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7"
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656530 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"
Feb 14 04:11:25 crc kubenswrapper[4867]: E0214 04:11:25.656677 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192"
Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.675606 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" podStartSLOduration=95.675587657 podStartE2EDuration="1m35.675587657s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:15.635649959 +0000 UTC m=+107.716587293" watchObservedRunningTime="2026-02-14 04:11:25.675587657 +0000 UTC m=+117.756524981"
Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.659061 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log"
Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996486 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996538 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996668 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996581 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997594 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996418 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996489 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996568 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.997987 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.998010 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998472 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998552 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998684 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:29 crc kubenswrapper[4867]: E0214 04:11:29.012464 4867 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 14 04:11:29 crc kubenswrapper[4867]: E0214 04:11:29.089740 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.996736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.996878 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.996887 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.997029 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997161 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.997774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997970 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996291 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996344 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996430 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996309 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996657 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.091428 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997787 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.997836 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998293 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998428 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.998605 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a"
Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998630 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.687788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log"
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.690491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"}
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.690953 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.912422 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podStartSLOduration=105.91239968 podStartE2EDuration="1m45.91239968s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:35.716594533 +0000 UTC m=+127.797531847" watchObservedRunningTime="2026-02-14 04:11:35.91239968 +0000 UTC m=+127.993336994"
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.913602 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"]
Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.913725 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:35 crc kubenswrapper[4867]: E0214 04:11:35.913837 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996681 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.996949 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.997080 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.997196 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:37 crc kubenswrapper[4867]: I0214 04:11:37.996272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:37 crc kubenswrapper[4867]: E0214 04:11:37.996598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:37 crc kubenswrapper[4867]: I0214 04:11:37.996631 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"
Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.710235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log"
Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.710996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"}
Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.996449 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.002923 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.003101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.003145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.003304 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.003417 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.092637 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.997184 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.997421 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.996930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997062 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.996929 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997140 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.997182 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997393 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:41 crc kubenswrapper[4867]: I0214 04:11:41.996593 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:41 crc kubenswrapper[4867]: E0214 04:11:41.996797 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996783 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996839 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996795 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.996930 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.997079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.997219 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:11:43 crc kubenswrapper[4867]: I0214 04:11:43.996759 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:43 crc kubenswrapper[4867]: E0214 04:11:43.996898 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.624665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656235 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656865 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656910 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.657550 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.657656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.658113 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.658665 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.659241 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.659701 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664264 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664335 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664729 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664739 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664931 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665225 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665378 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665571 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665732 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665984 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666076 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666092 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666431 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667318 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667852 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.668017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.668175 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-htv2n"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.670601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677786 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677854 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677937 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678076 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678087 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678214 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678404 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678406 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678633 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678937 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679114 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679303 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.681224 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.681858 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.682339 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.682853 4867 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.683810 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.684382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.684911 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.685360 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.687675 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.688056 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.689327 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.690064 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.690453 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.691061 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.692988 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.693442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.694824 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695110 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695165 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695297 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695320 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695527 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695558 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.698118 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699161 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699462 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699701 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699641 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.700352 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.700592 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.700960 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.711691 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712086 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712473 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712594 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712725 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712821 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.713218 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.713788 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.714322 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715586 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715636 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715704 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715772 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.716070 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719573 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719746 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719885 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719916 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.720892 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721585 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721746 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721811 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721893 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722049 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 04:11:44 
crc kubenswrapper[4867]: I0214 04:11:44.722095 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722128 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722185 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722269 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722295 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722552 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722681 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.722764 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722900 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.723852 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.723929 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724154 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724732 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.725423 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.725876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.726267 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.726308 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.727038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728174 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728798 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729026 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729424 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729822 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.730014 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.730679 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 
04:11:44.730765 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731183 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731834 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.732483 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.734347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.734802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735022 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735413 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.736006 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.740459 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.751574 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.752301 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.752794 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.753048 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.759907 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.763386 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.767452 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.767700 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.773088 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.774791 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.775019 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.775517 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.779562 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.780406 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.781116 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.781140 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.782268 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.782861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.783515 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.783652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.784216 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.787952 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789240 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789742 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.792070 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.793175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qlkzp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794014 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794421 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.795025 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.797605 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.798365 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.798836 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799107 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799410 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799608 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799969 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.801233 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.803467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.820828 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.824981 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.825912 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.827405 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.839305 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.842231 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.843690 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.846559 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.846618 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858031 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858886 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858937 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858962 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4jb\" (UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859057 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859073 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859096 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859127 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859143 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv86\" (UniqueName: \"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859204 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859239 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859298 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859394 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859442 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859494 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859557 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859605 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859660 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.861859 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.861982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862056 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862181 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862224 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862657 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862683 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863069 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863278 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863334 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863431 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863544 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863631 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.866149 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.868408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.878404 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.882805 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.884156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.885451 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-sz8l8"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.904423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.904575 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sz8l8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.907620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.909785 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.911676 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.916569 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.919666 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.921558 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.922493 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.923834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.926261 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"]
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.927169 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.929787 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.930559 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.931365 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.932378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.933313 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.934319 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.936425 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.937668 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.938694 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.939720 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"] Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.940968 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.941063 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.944939 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.958186 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.965945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.966707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv86\" (UniqueName: \"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967635 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967651 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967807 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: 
I0214 04:11:44.967901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 
04:11:44.968267 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968497 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968736 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: 
\"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968903 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969067 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969088 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969386 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969548 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969740 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 
04:11:44.969867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: 
\"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969987 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970029 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970191 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970273 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970323 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970349 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970359 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.970371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970468 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: 
\"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970490 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970534 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl4jb\" (UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970634 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970727 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971755 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972033 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod 
\"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod 
\"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973794 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974605 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974831 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.975197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970443 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.975299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976112 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976522 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.977645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.977820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.977957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.979840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.979950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980791 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980975 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.981030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.981602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.985114 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: 
I0214 04:11:44.990147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990285 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 
04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990562 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990667 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990853 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" 
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990959 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.991010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.991748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.992999 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.993498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.995082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996224 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996270 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996402 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.999554 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:44.999717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.018686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.039453 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.058928 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.078814 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.099040 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.119896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.139802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.159196 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.179239 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.198470 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.234425 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.238847 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.258483 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.284686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.299232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.319236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.339643 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.359033 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.378636 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.385499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.399748 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.418605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.421592 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.438310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.458770 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.464215 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.479760 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.482707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.504923 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.513929 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.519180 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.524683 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.539329 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.578895 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.600002 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.618875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.638436 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.660387 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.679962 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.699627 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.719249 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.739787 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.759239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.779898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.797252 4867 request.go:700] Waited for 1.015297606s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.799537 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.819386 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.839876 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.859013 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.879494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.899223 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.918642 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.938795 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.958554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.979281 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.996965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.998568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.019752 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.039413 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.059678 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.087246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.098840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.119683 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.138481 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.158284 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.178905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.199188 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.219041 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.238801 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.259616 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.280172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.299868 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.320204 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.339734 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.358764 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.380018 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.399282 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.418644 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.439883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.460256 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.480619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.500316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.520945 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.559908 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.579744 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.599941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.619547 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.639241 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.659661 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.678840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.699617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.720276 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.739021 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.774770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfv86\" (UniqueName: \"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.792536 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.797354 4867 request.go:700] Waited for 1.827950885s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.812829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.833111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.852759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.883620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.893284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.895971 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.907671 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.914520 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.918216 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.934485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.956008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.973239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.984220 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.997014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.017090 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.024944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.033573 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.056637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.057865 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.068225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.074759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.078139 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.087411 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.101868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.115321 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.115772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl4jb\" (UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.121213 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.123037 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.135410 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.156819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.160610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.167637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.168463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.169141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.181146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.181345 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.206073 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.210370 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"]
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.213519 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1815da32_cba4_41f4_80ca_45a750c7e93f.slice/crio-813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222 WatchSource:0}: Error finding container 813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222: Status 404 returned error can't find the container with id 813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.219056 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.224717 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"]
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.230876 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd58c6e7c_e0bc_4833_ab34_348c03f75da7.slice/crio-c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142 WatchSource:0}: Error finding container c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142: Status 404 returned error can't find the container with id c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.234777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.242347 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72546cbc_3499_4110_b0e4_58beab7cc8a5.slice/crio-ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a WatchSource:0}: Error finding container ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a: Status 404 returned error can't find the container with id ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.247882 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.263692 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.267799 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.284640 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod835c6d49_e42e_444a_a276_fb9f064fdbda.slice/crio-144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d WatchSource:0}: Error finding container 144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d: Status 404 returned error can't find the container with id 144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.285186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.309620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310644 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310715 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310916 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311017 4867 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311036 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311223 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311253 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311324 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311385 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: 
\"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311450 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311494 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.311828 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:47.81181592 +0000 UTC m=+139.892753234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.312483 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.341367 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.412959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413136 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: 
\"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413229 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413247 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413301 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: 
\"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413393 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413441 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413489 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: 
\"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413627 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413744 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413775 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 
04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2g9\" (UniqueName: 
\"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414033 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414085 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414521 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" 
(UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414691 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414707 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whx59\" (UniqueName: 
\"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: 
I0214 04:11:47.414974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: 
\"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9gw\" (UniqueName: \"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415133 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415159 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415228 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: 
\"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415383 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.416787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.417222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.417706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418646 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 
14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.426044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: 
\"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.430805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.431265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.437565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.437705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.440079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.449246 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.449611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.451059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.451581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 
04:11:47.452093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.453183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.454792 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.457533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.463674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: 
\"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.468845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.470727 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.470982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.472829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.472555 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.475091 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:47.975051522 +0000 UTC m=+140.055988836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.475939 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.477262 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.479154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.491994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.499609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 
14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519566 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whx59\" (UniqueName: \"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " 
pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9gw\" (UniqueName: 
\"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519736 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 
04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519864 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520078 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520097 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520134 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520242 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520262 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2g9\" (UniqueName: \"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 
04:11:47.520457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.523299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.523466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.524453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.524623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.525296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.525557 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.526997 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " 
pod="openshift-machine-config-operator/machine-config-server-sz8l8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527318 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h"
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.527564 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.027551411 +0000 UTC m=+140.108488725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.528130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.528775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.529685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.530036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.532637 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.532698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.533319 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.533365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.535842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.536275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540137 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.543547 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.543968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.544257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.545041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.545489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.556166 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894233bb_65ed_4cdd_ac61_7a8bd8f66140.slice/crio-ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a WatchSource:0}: Error finding container ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a: Status 404 returned error can't find the container with id ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.556488 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.556642 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.566948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.570828 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.572974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.578226 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.586473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.597805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.613136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.621738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.622185 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.122160193 +0000 UTC m=+140.203097507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.634711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.643403 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8f75ff_3558_4d7b_8adb_722a732d0633.slice/crio-702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c WatchSource:0}: Error finding container 702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c: Status 404 returned error can't find the container with id 702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.650489 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.694477 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.695114 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.698065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.701981 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.720750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.724796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.725119 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.225108738 +0000 UTC m=+140.306046042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.730260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.734733 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.744101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.744890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"2bbbfd0f929a463b3834210b817fb454c9c5152759f36a425668d7478a36ca3a"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.745497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" event={"ID":"1815da32-cba4-41f4-80ca-45a750c7e93f","Type":"ContainerStarted","Data":"813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.747199 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"f16afef3d7e808dbf734065cea30fefc8ce32136d50bcf987d25ea20a8ea7a54"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.748197 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.749985 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.750770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" event={"ID":"835c6d49-e42e-444a-a276-fb9f064fdbda","Type":"ContainerStarted","Data":"144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.753437 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerStarted","Data":"c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.755756 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.760731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerStarted","Data":"c43a26497795da97ad6a6c4586b62e12ae1ccaaa8dd33d4cfe17199345411003"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.763439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.764957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"ec6dcbf0f8a230a42d760e895824929c89848228caceaa01f075507289d58748"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.765428 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9gw\" (UniqueName: \"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.768962 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.769499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c"}
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.769726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.771199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.775709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.785481 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-htv2n"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.793693 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.800247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.801730 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4 WatchSource:0}: Error finding container 0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4: Status 404 returned error can't find the container with id 0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.805662 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f6fd76_f362_495f_969d_a644f072552f.slice/crio-607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b WatchSource:0}: Error finding container 607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b: Status 404 returned error can't find the container with id 607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.807128 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46c3923_f64c_42de_b84c_98bc872f5de6.slice/crio-227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a WatchSource:0}: Error finding container 227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a: Status 404 returned error can't find the container with id 227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.817890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.820209 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826078 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.826259 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.326231936 +0000 UTC m=+140.407169250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826466 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.826865 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.326843932 +0000 UTC m=+140.407781256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.830845 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sz8l8"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.835235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.841671 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.854282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ftf5"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.855203 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whx59\" (UniqueName: \"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.862872 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c4dfcc_144e_40cd_bed2_dc28c210a130.slice/crio-263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6 WatchSource:0}: Error finding container 263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6: Status 404 returned error can't find the container with id 263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.875761 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2g9\" (UniqueName: \"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.894515 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.896019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.917491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.927792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.927975 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.42794729 +0000 UTC m=+140.508884604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.928143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.928568 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.428556145 +0000 UTC m=+140.509493459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.937602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.953828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.975975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.988699 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"]
Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.992000 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"]
Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.014294 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd832b4_de40_4266_93fb_3682eeb9dd3e.slice/crio-eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e WatchSource:0}: Error finding container eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e: Status 404 returned error can't find the container with id eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.028917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.029321 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh"
Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.029492 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.529474968 +0000 UTC m=+140.610412282 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.029754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.030051 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.530043383 +0000 UTC m=+140.610980697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.033539 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8ea50d_6822_425a_8eac_6311c8537eb7.slice/crio-50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78 WatchSource:0}: Error finding container 50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78: Status 404 returned error can't find the container with id 50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.036461 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.043514 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.045881 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.052354 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.053328 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bcb9a2_1128_4c6b_80b1_47afd1a46511.slice/crio-9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674 WatchSource:0}: Error finding container 9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674: Status 404 returned error can't find the container with id 9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.056707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.071886 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.078397 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.086723 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.101932 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.109404 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.125257 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.130279 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.130623 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.630608947 +0000 UTC m=+140.711546261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.146185 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.166498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.232752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.233076 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.733065829 +0000 UTC m=+140.814003143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.305354 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.334104 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.334275 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.834242749 +0000 UTC m=+140.915180063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.334404 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.334685 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.83467373 +0000 UTC m=+140.915611044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.342472 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc723269_8ee6_4236_9eaa_169a00d76442.slice/crio-c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c WatchSource:0}: Error finding container c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c: Status 404 returned error can't find the container with id c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.348195 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.370149 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.435072 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.435416 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.935391348 +0000 UTC m=+141.016328662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.435656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.435915 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.935904051 +0000 UTC m=+141.016841365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.454282 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.455701 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.465187 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a16b0f1_4ef6_457a_a766_a0cc2181501f.slice/crio-d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663 WatchSource:0}: Error finding container d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663: Status 404 returned error can't find the container with id d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663 Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.491289 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d4609f_f699_4ac2_bc41_752b879681ba.slice/crio-0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107 WatchSource:0}: Error finding container 0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107: Status 404 returned error can't find the container with id 0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.534314 4867 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.536901 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.537357 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.037342957 +0000 UTC m=+141.118280271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.638756 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.639472 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.139459351 +0000 UTC m=+141.220396665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.740547 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.740867 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.240851495 +0000 UTC m=+141.321788809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.842546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.843217 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.343200265 +0000 UTC m=+141.424137579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.891888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" event={"ID":"835c6d49-e42e-444a-a276-fb9f064fdbda","Type":"ContainerStarted","Data":"cc2f33bbd998239443d5512e9c48d9641b1036c627268f3ee030a0d1cbcb4206"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.897976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" event={"ID":"89db71f1-1a8b-4c57-9a3d-eb725060aee9","Type":"ContainerStarted","Data":"b30438543696cd384ed51ec93bdf53c2b2d40d7cbe5536f977a7badcc6e3f3fe"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.916207 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.919414 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.921249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" event={"ID":"77ddb26b-22ee-4a97-81ab-7e82c611ebd5","Type":"ContainerStarted","Data":"6f58856879441c20ef32c48d2b07eeb92fe9c4144e96f5d0e64cc487391dceab"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.921668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" event={"ID":"77ddb26b-22ee-4a97-81ab-7e82c611ebd5","Type":"ContainerStarted","Data":"cc79acb7ca2c05a7b6f2b1184ae328fe64a8f5e6b88328704675be85a52db37e"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.935583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerStarted","Data":"0005bb5ab795f3cb3316208372a9d4195e426c2a1f38a510bf0162032f954a9f"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.943682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.943866 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.443837771 +0000 UTC m=+141.524775085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.943978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.945410 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.44535797 +0000 UTC m=+141.526295504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.964198 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"]
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.964380 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"5f6ce5ba2b04602f0c14203c86f267a1383ed602966128d4ccefac88636b0e0f"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.972386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerStarted","Data":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.972432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerStarted","Data":"0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.981158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" event={"ID":"02d4609f-f699-4ac2-bc41-752b879681ba","Type":"ContainerStarted","Data":"0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.983552 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" event={"ID":"6d8ea50d-6822-425a-8eac-6311c8537eb7","Type":"ContainerStarted","Data":"50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.985047 4867 generic.go:334] "Generic (PLEG): container finished" podID="894233bb-65ed-4cdd-ac61-7a8bd8f66140" containerID="1e918a4597fc13bcf23fff6b70d5dcd093ca46273a1af1cda296479943dc1f92" exitCode=0
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.985872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerDied","Data":"1e918a4597fc13bcf23fff6b70d5dcd093ca46273a1af1cda296479943dc1f92"}
Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.991741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"e2a84ac2941e9118b0d6ca163c3b651937c952981f9423f9c36f0e1f4479d0bf"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.009042 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.009105 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020661 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sz8l8" event={"ID":"0d05475f-b787-49dc-8a0b-c98e47f40a3b","Type":"ContainerStarted","Data":"bc740684119f9953c31d7aa9b7d34476d57a87ba84403a05c62af5df446355d0"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.028284 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"]
Feb 14 04:11:49 crc kubenswrapper[4867]: W0214 04:11:49.028853 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b196c26_84a1_408f_913b_eb50572102cf.slice/crio-bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b WatchSource:0}: Error finding container bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b: Status 404 returned error can't find the container with id bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.032065 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.032119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.046240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.046849 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.546816087 +0000 UTC m=+141.627753401 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.053121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.054991 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.554971864 +0000 UTC m=+141.635909178 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.080058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c"}
Feb 14 04:11:49 crc kubenswrapper[4867]: W0214 04:11:49.099163 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd74f081b_fe53_4642_8340_a8e602c627f1.slice/crio-6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2 WatchSource:0}: Error finding container 6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2: Status 404 returned error can't find the container with id 6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.130163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.139083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" event={"ID":"1815da32-cba4-41f4-80ca-45a750c7e93f","Type":"ContainerStarted","Data":"54533f7991dc430af26aa8af2dd88dc0fc6f065ca009bdb0a7dac8bbe30947df"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.152126 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.155662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.156297 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.656280457 +0000 UTC m=+141.737217771 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.160254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"7a72087c5b6144c6f3aed9ba692230758bb339399f3f613b14ed37ff2fa94e73"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.178735 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" event={"ID":"d46c3923-f64c-42de-b84c-98bc872f5de6","Type":"ContainerStarted","Data":"74bb88ddf246c9f9f45e960909d198f3f135c09d430e61473c281c28c45bee0c"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.178773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" event={"ID":"d46c3923-f64c-42de-b84c-98bc872f5de6","Type":"ContainerStarted","Data":"227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.185764 4867 generic.go:334] "Generic (PLEG): container finished" podID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerID="12da2f6592db926bc6b038b2412413441a52d11e48a0905c013aecc02bac9d5b" exitCode=0
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.185814 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerDied","Data":"12da2f6592db926bc6b038b2412413441a52d11e48a0905c013aecc02bac9d5b"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.199874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" event={"ID":"22c4dfcc-144e-40cd-bed2-dc28c210a130","Type":"ContainerStarted","Data":"263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.217929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerStarted","Data":"d80c060a94d17951aad5e051f55bf43d373a158b1129e1b3c3d94726f3601c49"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.222049 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"974151c82d92401f369f342fcb19c0e0d4a552b08f80e650f6f661183db79009"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.233731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"aeb08e1d2ccc4adbf42036ec7046b270415d4121c0d2775b6d94c8142ecb9b04"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.239877 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.264068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.273175 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.773150707 +0000 UTC m=+141.854088021 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.296756 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.296801 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"f51659d90d716607c500c828d381bbeb2f56b13403ec3fa1d830b9afe7e14995"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.305730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.306052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.309054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"1798ae6291d65e0cbe62da82880f9738a214a7260938d99f551bb2b6fd0ad5ff"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.309944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.310849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.312942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerStarted","Data":"b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e"}
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.313266 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.327846 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pctg8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.327900 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.344154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.365626 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.365801 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.865773369 +0000 UTC m=+141.946710683 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.365937 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.366270 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.866259741 +0000 UTC m=+141.947197055 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.417926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.453318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"]
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.466690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.466790 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.966764124 +0000 UTC m=+142.047701438 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.466950 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.468043 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.968029066 +0000 UTC m=+142.048966420 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.488394 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-x9sjv" podStartSLOduration=118.488370195 podStartE2EDuration="1m58.488370195s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.466738343 +0000 UTC m=+141.547675657" watchObservedRunningTime="2026-02-14 04:11:49.488370195 +0000 UTC m=+141.569307509"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.490928 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" podStartSLOduration=118.490915159 podStartE2EDuration="1m58.490915159s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.48546044 +0000 UTC m=+141.566397754" watchObservedRunningTime="2026-02-14 04:11:49.490915159 +0000 UTC m=+141.571852483"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.567754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.567928 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.067897262 +0000 UTC m=+142.148834576 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.568092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.568406 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.068391035 +0000 UTC m=+142.149328349 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.669133 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.669298 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.169275687 +0000 UTC m=+142.250213001 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.669498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.669747 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.169739859 +0000 UTC m=+142.250677163 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.771187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.771964 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.271948944 +0000 UTC m=+142.352886258 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.853900 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" podStartSLOduration=118.853882033 podStartE2EDuration="1m58.853882033s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.852670132 +0000 UTC m=+141.933607456" watchObservedRunningTime="2026-02-14 04:11:49.853882033 +0000 UTC m=+141.934819347"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.854457 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" podStartSLOduration=118.854450747 podStartE2EDuration="1m58.854450747s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.822676407 +0000 UTC m=+141.903613721" watchObservedRunningTime="2026-02-14 04:11:49.854450747 +0000 UTC m=+141.935388061"
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.872863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.873203 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.373187205 +0000 UTC m=+142.454124519 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.973968 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.974137 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.474112988 +0000 UTC m=+142.555050302 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.974216 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.974497 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.474486058 +0000 UTC m=+142.555423422 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.075182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.077072 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.577044973 +0000 UTC m=+142.657982357 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.182921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.183325 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.683311082 +0000 UTC m=+142.764248406 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.220324 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" podStartSLOduration=119.220304105 podStartE2EDuration="1m59.220304105s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.103488637 +0000 UTC m=+142.184425961" watchObservedRunningTime="2026-02-14 04:11:50.220304105 +0000 UTC m=+142.301241439" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.250764 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podStartSLOduration=119.250743311 podStartE2EDuration="1m59.250743311s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.248786441 +0000 UTC m=+142.329723755" watchObservedRunningTime="2026-02-14 04:11:50.250743311 +0000 UTC m=+142.331680625" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.287939 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.288036 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.788020952 +0000 UTC m=+142.868958266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.288316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.288604 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.788596146 +0000 UTC m=+142.869533460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.310694 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c4c52" podStartSLOduration=120.310677829 podStartE2EDuration="2m0.310677829s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.306712168 +0000 UTC m=+142.387649482" watchObservedRunningTime="2026-02-14 04:11:50.310677829 +0000 UTC m=+142.391615143" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.353184 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podStartSLOduration=119.353167603 podStartE2EDuration="1m59.353167603s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.328491514 +0000 UTC m=+142.409428828" watchObservedRunningTime="2026-02-14 04:11:50.353167603 +0000 UTC m=+142.434104917" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.398840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.398933 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.898909789 +0000 UTC m=+142.979847103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.399219 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.399579 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.899567666 +0000 UTC m=+142.980504970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.401454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerStarted","Data":"0b46292ee8547b3f863b2a98bb8fb2cf8703a9757ad76735d9fe0ebd6ef2ffbd"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.404013 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.423343 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"7359c9966fb493273fc78879abb8bc048cba601f71bd1221d1053d939eaff9ef"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.440726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"f397ed60c1c846321f943b10609443e3f5bd17a9c6dd2ecf373fb19774fdd18f"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.443269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" event={"ID":"ec0d5c79-9e98-4f09-a336-9c284ba81d82","Type":"ContainerStarted","Data":"d1780c3399c6c4ac3170d23e77834088498a3ff63bc91665ba42c8a13e3d4fbb"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.445381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sz8l8" event={"ID":"0d05475f-b787-49dc-8a0b-c98e47f40a3b","Type":"ContainerStarted","Data":"65216933a12d5c19e8bc55d0d569c235523b6dbe22cd57d5990271ce4e425222"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.446247 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ftf5" event={"ID":"d3658855-0c06-490f-9bcc-33de7069178e","Type":"ContainerStarted","Data":"5613c8dd19ecd64e0d1180d68287ca020cc270c9863eb29760e0d932df960c3a"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.447120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"2505211cfa615779d9f8e3b0b78e975c0737917c367dbe131808e6bc917ecd9d"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.448538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" event={"ID":"d74f081b-fe53-4642-8340-a8e602c627f1","Type":"ContainerStarted","Data":"6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.452680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" event={"ID":"dd4dbaf5-45ee-4171-b6b9-7deba44931ff","Type":"ContainerStarted","Data":"66d11694151f7873d16f9b3dbc561e7d675ca9fa539f15e70d22c62627ee1279"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 
04:11:50.459172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"ca0c26a1e9b7b16e001e49e5ddce44e9963632069c1c51977ac55d694d506ff1"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.461054 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1f6fd76-f362-495f-969d-a644f072552f" containerID="66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1" exitCode=0 Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.461111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerDied","Data":"66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.466160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"0f4ffcd9be28b010cbb3f90f45a501cb69ec2cbb557453e40c94ee2eaabe1408"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.467261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerStarted","Data":"e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.487804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"b0126d8e37d5f7cc69f3c939759dd77b3373c63949068219c15168a6526dc330"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.496062 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.499104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"e03a7a990f5400e00c868e6bf732598ed46ee2c93e55a4f998fa09c139acce06"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.501823 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pctg8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.501854 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.502126 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.502145 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 
10.217.0.23:8080: connect: connection refused" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.504355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.505493 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.005475616 +0000 UTC m=+143.086412930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.607981 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.615745 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:51.115721757 +0000 UTC m=+143.196659071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.709709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.710194 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.210179345 +0000 UTC m=+143.291116659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.811907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.812300 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.312286099 +0000 UTC m=+143.393223413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.912990 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.913493 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.413474388 +0000 UTC m=+143.494411702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.014558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.014873 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.514861264 +0000 UTC m=+143.595798578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.115979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.116133 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.616115665 +0000 UTC m=+143.697052979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.116210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.116527 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.616501215 +0000 UTC m=+143.697438529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.217233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.217449 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.717424608 +0000 UTC m=+143.798361922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.217690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.218032 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.718018603 +0000 UTC m=+143.798955917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.318524 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.318834 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.818819733 +0000 UTC m=+143.899757037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.420617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.421090 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.92107552 +0000 UTC m=+144.002012854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.457679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.507776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"dc6b34c0a2b6b91075fb741871027a4a30faaff955391c22fdb83614576be619"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.508911 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"6a9fdef78d2c3532db91530b9b0f923268929b942fc13d36c12fa391ad6c39d5"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.509850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" event={"ID":"89db71f1-1a8b-4c57-9a3d-eb725060aee9","Type":"ContainerStarted","Data":"32f54270e4bbc4ffd262bfa8f6df761c3f4f277c90d8ea5a8e2f59467a048f45"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.511881 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" 
event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"5f164b53316141d80833dd0afb26eb9682abfcc6f23401e2fa506cbf27329a34"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.514970 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"90e24cbafd8c59084ad3aa234e814bb76c7cc62e3f4fd2f231f08f478ee21fe0"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.516144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" event={"ID":"02d4609f-f699-4ac2-bc41-752b879681ba","Type":"ContainerStarted","Data":"ee04e324663f8fc4b82cb8b67e9abaf1041eff947957b2c683b5e82e076739c3"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.521802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.522170 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.022155548 +0000 UTC m=+144.103092862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.531725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"ff1d2840fe467c400dc900559308f5e595fcd476b4c879a9698aeab0690fa07f"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.532712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" event={"ID":"22c4dfcc-144e-40cd-bed2-dc28c210a130","Type":"ContainerStarted","Data":"89cc21aca7d7ce86585c86f456df154687acf8be7a8390235ac7c35d06f5ef7f"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.534664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerStarted","Data":"ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.535275 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.536917 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.536950 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.538353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"1fbdab536832bc3ffe63dca56aee4c29e70508ec0e812efabd142713405560ce"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.539797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerStarted","Data":"271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.540528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.541886 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.541918 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.547461 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" podStartSLOduration=120.547446412 podStartE2EDuration="2m0.547446412s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.544751914 +0000 UTC m=+143.625689218" watchObservedRunningTime="2026-02-14 04:11:51.547446412 +0000 UTC m=+143.628383726" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.548894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"304bdadfd0110c34cc762c32f0da538f1989fb2efbd08e2faec1ba1b223f466d"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.551162 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" event={"ID":"6d8ea50d-6822-425a-8eac-6311c8537eb7","Type":"ContainerStarted","Data":"53f2d770e25766aa294ce2cd51e6fda4ecaed6b876043de753f867fb66dc79d7"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.551891 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.553596 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: 
connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.553629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.578032 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" podStartSLOduration=120.578017332 podStartE2EDuration="2m0.578017332s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.577742335 +0000 UTC m=+143.658679659" watchObservedRunningTime="2026-02-14 04:11:51.578017332 +0000 UTC m=+143.658954646" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.617775 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podStartSLOduration=120.617758275 podStartE2EDuration="2m0.617758275s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.617657443 +0000 UTC m=+143.698594757" watchObservedRunningTime="2026-02-14 04:11:51.617758275 +0000 UTC m=+143.698695589" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.624150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.625988 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.125976055 +0000 UTC m=+144.206913369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.675115 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podStartSLOduration=120.675094317 podStartE2EDuration="2m0.675094317s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.671407363 +0000 UTC m=+143.752344687" watchObservedRunningTime="2026-02-14 04:11:51.675094317 +0000 UTC m=+143.756031631" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.680395 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" podStartSLOduration=120.680367451 podStartE2EDuration="2m0.680367451s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.649041843 +0000 UTC 
m=+143.729979157" watchObservedRunningTime="2026-02-14 04:11:51.680367451 +0000 UTC m=+143.761304825" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.720331 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" podStartSLOduration=120.72031596 podStartE2EDuration="2m0.72031596s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.701459809 +0000 UTC m=+143.782397123" watchObservedRunningTime="2026-02-14 04:11:51.72031596 +0000 UTC m=+143.801253274" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.721760 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-sz8l8" podStartSLOduration=7.721755327 podStartE2EDuration="7.721755327s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.720217738 +0000 UTC m=+143.801155052" watchObservedRunningTime="2026-02-14 04:11:51.721755327 +0000 UTC m=+143.802692641" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.726095 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.727906 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:52.227884533 +0000 UTC m=+144.308821847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.745284 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podStartSLOduration=121.745266476 podStartE2EDuration="2m1.745266476s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.74502355 +0000 UTC m=+143.825960864" watchObservedRunningTime="2026-02-14 04:11:51.745266476 +0000 UTC m=+143.826203790" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.827427 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.828326 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.328296723 +0000 UTC m=+144.409234047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.928241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.928612 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.42858897 +0000 UTC m=+144.509526284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.032169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.032582 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.532566251 +0000 UTC m=+144.613503565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.132955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.133095 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.633071784 +0000 UTC m=+144.714009088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.133201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.133540 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.633532236 +0000 UTC m=+144.714469550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.234074 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.234427 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.734412768 +0000 UTC m=+144.815350082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.336015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.336538 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.83649212 +0000 UTC m=+144.917429434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.437315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.437692 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.93767847 +0000 UTC m=+145.018615784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.539587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.539985 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.039972248 +0000 UTC m=+145.120909572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.561591 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" event={"ID":"ec0d5c79-9e98-4f09-a336-9c284ba81d82","Type":"ContainerStarted","Data":"900bb8b6bc424bcc0d4213869ba2132ba509a17f54cb6d5786f79ffa8f2ff01a"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.564521 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"e4e60affe86a35fc1b3546c424ffe18fb73433fa54f7e1f2f48230d3938cb514"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.566534 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"ad2ddf4680e69f0a913bce1ff89fea465b130eba23650d73e98a805b35546172"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.570894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" event={"ID":"d74f081b-fe53-4642-8340-a8e602c627f1","Type":"ContainerStarted","Data":"8e1c452b54860770b53ac4d26fe606d56a8da1c4532f5ebb807da0e51ca4911a"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.577978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerStarted","Data":"c8e82d7f6512b2d6b5c03b51ba8a2b0d813ac1588b43a82d35118815f7fec1a7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.581448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.582875 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"6d4453329edd29451bae0a09af90381f5e724a96b41cd88fd8fce385eb3b0938"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.584652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"1abbdcf648a7bfd0096b2c9b5b18705a13408f9f258027c631990c9d23109908"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.587444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.588180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.590975 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.591020 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.594849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"adcf037effe8823e62cc635472c88504b24b940b865e88039d35e39c4e81f334"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.595749 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" podStartSLOduration=121.59573005 podStartE2EDuration="2m1.59573005s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.594639742 +0000 UTC m=+144.675577056" watchObservedRunningTime="2026-02-14 04:11:52.59573005 +0000 UTC m=+144.676667364" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.597428 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerStarted","Data":"aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.598705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" 
event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerStarted","Data":"51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.599598 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.601142 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.601190 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.602085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.602687 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604451 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" 
start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604497 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604772 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"6a0a494f29ffa335720d7960fce257fdb4789ba5266a571250856c1caa1d4139"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.606165 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.606216 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.607643 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.607673 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.609067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"c6c39938bfb9f99f937a0fc65d181fea0eb1da601b9f5674b7e62e146b7e19eb"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.619109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"67c1e7d10b3abf8fcc8deed18cda3d4daabcb2d1f501d3cd9da57cd0242ef6c3"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.620474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" event={"ID":"dd4dbaf5-45ee-4171-b6b9-7deba44931ff","Type":"ContainerStarted","Data":"0afe2d8ca5740eb65cbdba4d5b86f18abf64813249d78245a81d0c7fae76c57d"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.621814 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podStartSLOduration=121.621799785 podStartE2EDuration="2m1.621799785s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.618731047 +0000 UTC m=+144.699668361" watchObservedRunningTime="2026-02-14 04:11:52.621799785 +0000 UTC m=+144.702737099" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.625265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" 
event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"82b37a1a0a51ba5be1a38f645454c34b41d59a7c8c5d04f87682e4e4b69cd548"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.625411 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.627304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"3bb257dbc0b7e413b76e942b1666e5f7fbceaca7b423608496b33ebb41a122d7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.632535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ftf5" event={"ID":"d3658855-0c06-490f-9bcc-33de7069178e","Type":"ContainerStarted","Data":"31ebad694423c3f7c2ca5e7854062b07fbed0bf71eb51ec69427bf63965f12f7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635383 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635416 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635468 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635532 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635674 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635845 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.643356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.644007 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:53.14398836 +0000 UTC m=+145.224925674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.644528 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" podStartSLOduration=121.644517754 podStartE2EDuration="2m1.644517754s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.642879512 +0000 UTC m=+144.723816836" watchObservedRunningTime="2026-02-14 04:11:52.644517754 +0000 UTC m=+144.725455068" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.676601 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" podStartSLOduration=121.676582241 podStartE2EDuration="2m1.676582241s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.676203842 +0000 UTC m=+144.757141156" watchObservedRunningTime="2026-02-14 04:11:52.676582241 +0000 UTC m=+144.757519555" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.713726 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podStartSLOduration=121.713706978 
podStartE2EDuration="2m1.713706978s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.705303614 +0000 UTC m=+144.786240938" watchObservedRunningTime="2026-02-14 04:11:52.713706978 +0000 UTC m=+144.794644302" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.731925 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" podStartSLOduration=122.731907122 podStartE2EDuration="2m2.731907122s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.731304697 +0000 UTC m=+144.812242011" watchObservedRunningTime="2026-02-14 04:11:52.731907122 +0000 UTC m=+144.812844436" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.750556 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qlkzp" podStartSLOduration=121.750540177 podStartE2EDuration="2m1.750540177s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.75026902 +0000 UTC m=+144.831206334" watchObservedRunningTime="2026-02-14 04:11:52.750540177 +0000 UTC m=+144.831477491" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.750926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 
04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.762006 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.261989169 +0000 UTC m=+145.342926543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.820135 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" podStartSLOduration=121.820121031 podStartE2EDuration="2m1.820121031s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.789367287 +0000 UTC m=+144.870304601" watchObservedRunningTime="2026-02-14 04:11:52.820121031 +0000 UTC m=+144.901058335" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.849873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podStartSLOduration=122.849851739 podStartE2EDuration="2m2.849851739s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.847863099 +0000 UTC m=+144.928800413" watchObservedRunningTime="2026-02-14 04:11:52.849851739 +0000 UTC 
m=+144.930789053" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.850494 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podStartSLOduration=121.850487926 podStartE2EDuration="2m1.850487926s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.821086505 +0000 UTC m=+144.902023819" watchObservedRunningTime="2026-02-14 04:11:52.850487926 +0000 UTC m=+144.931425230" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.870286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.870997 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.370957647 +0000 UTC m=+145.451894961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.890667 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" podStartSLOduration=121.890646699 podStartE2EDuration="2m1.890646699s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.888988127 +0000 UTC m=+144.969925441" watchObservedRunningTime="2026-02-14 04:11:52.890646699 +0000 UTC m=+144.971584013" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.921008 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" podStartSLOduration=121.920994693 podStartE2EDuration="2m1.920994693s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.919999188 +0000 UTC m=+145.000936502" watchObservedRunningTime="2026-02-14 04:11:52.920994693 +0000 UTC m=+145.001932007" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.953682 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" podStartSLOduration=122.953663826 podStartE2EDuration="2m2.953663826s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.952173018 +0000 UTC m=+145.033110332" watchObservedRunningTime="2026-02-14 04:11:52.953663826 +0000 UTC m=+145.034601140" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.975198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.975601 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.475580025 +0000 UTC m=+145.556517339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.979850 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" podStartSLOduration=121.979836133 podStartE2EDuration="2m1.979836133s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.978023607 +0000 UTC m=+145.058960931" watchObservedRunningTime="2026-02-14 04:11:52.979836133 +0000 UTC m=+145.060773437" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.016961 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podStartSLOduration=122.01694575 podStartE2EDuration="2m2.01694575s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.015193035 +0000 UTC m=+145.096130349" watchObservedRunningTime="2026-02-14 04:11:53.01694575 +0000 UTC m=+145.097883054" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.038839 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" podStartSLOduration=122.038821977 podStartE2EDuration="2m2.038821977s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.037585206 +0000 UTC m=+145.118522520" watchObservedRunningTime="2026-02-14 04:11:53.038821977 +0000 UTC m=+145.119759291" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.075876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.076136 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.576121459 +0000 UTC m=+145.657058773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.103835 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.116850 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.116901 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.152807 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" podStartSLOduration=122.152790193 podStartE2EDuration="2m2.152790193s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.152181078 +0000 UTC m=+145.233118392" watchObservedRunningTime="2026-02-14 04:11:53.152790193 +0000 UTC m=+145.233727507" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.153599 
4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" podStartSLOduration=123.153592804 podStartE2EDuration="2m3.153592804s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.093015009 +0000 UTC m=+145.173952323" watchObservedRunningTime="2026-02-14 04:11:53.153592804 +0000 UTC m=+145.234530118" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.177107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.177450 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.677435221 +0000 UTC m=+145.758372535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.179546 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podStartSLOduration=122.179533415 podStartE2EDuration="2m2.179533415s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.179086774 +0000 UTC m=+145.260024088" watchObservedRunningTime="2026-02-14 04:11:53.179533415 +0000 UTC m=+145.260470729" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.257326 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8ftf5" podStartSLOduration=9.257309448 podStartE2EDuration="9.257309448s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.218921159 +0000 UTC m=+145.299858473" watchObservedRunningTime="2026-02-14 04:11:53.257309448 +0000 UTC m=+145.338246762" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.257710 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" podStartSLOduration=122.257706508 podStartE2EDuration="2m2.257706508s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.256788765 +0000 UTC m=+145.337726079" watchObservedRunningTime="2026-02-14 04:11:53.257706508 +0000 UTC m=+145.338643822" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.278489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.278642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.778612521 +0000 UTC m=+145.859549845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.278898 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.279334 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.779315479 +0000 UTC m=+145.860252853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.379617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.379820 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.87979581 +0000 UTC m=+145.960733124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.380065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.380335 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.880324023 +0000 UTC m=+145.961261337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.481138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.481454 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.981429621 +0000 UTC m=+146.062366935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.582781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.583074 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.083063302 +0000 UTC m=+146.164000616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.639453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"6bbbbeedff53f1e49ea9cd3f79ae63d75d6bc0433fb4e9f819daa726730735e0"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.639669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.641414 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"4823dc08f4332c870bc0a784be9acf6b08614d27f9fcc58f84d0a6d513455976"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.643773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.643809 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656349 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656400 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656416 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656419 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656466 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656470 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656524 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656485 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656376 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656573 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656574 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: 
connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656473 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.676686 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gc8sl" podStartSLOduration=9.676668029 podStartE2EDuration="9.676668029s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.676135965 +0000 UTC m=+145.757073279" watchObservedRunningTime="2026-02-14 04:11:53.676668029 +0000 UTC m=+145.757605343" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.688774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.689149 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.189136747 +0000 UTC m=+146.270074061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.702424 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podStartSLOduration=122.702404145 podStartE2EDuration="2m2.702404145s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.699796419 +0000 UTC m=+145.780733733" watchObservedRunningTime="2026-02-14 04:11:53.702404145 +0000 UTC m=+145.783341449" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.734810 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" podStartSLOduration=122.734790171 podStartE2EDuration="2m2.734790171s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.732613165 +0000 UTC m=+145.813550479" watchObservedRunningTime="2026-02-14 04:11:53.734790171 +0000 UTC m=+145.815727485" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.791074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: 
\"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.798192 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.298176337 +0000 UTC m=+146.379113741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.897012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.897115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.397099379 +0000 UTC m=+146.478036693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.897362 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.897639 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.397632813 +0000 UTC m=+146.478570127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.999309 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.999558 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.499530131 +0000 UTC m=+146.580467455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.999703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.000182 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.500172057 +0000 UTC m=+146.581109381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.101372 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.101756 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.601738147 +0000 UTC m=+146.682675471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.104143 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.104196 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.202844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.203293 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.703272196 +0000 UTC m=+146.784209580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.307891 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.308037 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.808019946 +0000 UTC m=+146.888957260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.308189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.308394 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.808387175 +0000 UTC m=+146.889324489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.409262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.409380 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.90935496 +0000 UTC m=+146.990292274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.409800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.410115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.910103669 +0000 UTC m=+146.991040973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.511192 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.511557 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.011540345 +0000 UTC m=+147.092477659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.612611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.612945 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.11293399 +0000 UTC m=+147.193871294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654495 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654793 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654827 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654954 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 04:11:54 crc 
kubenswrapper[4867]: I0214 04:11:54.655200 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.655259 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.713361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.713573 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.213555336 +0000 UTC m=+147.294492650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.713675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.713999 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.213986287 +0000 UTC m=+147.294923601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.815917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.816081 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.316050069 +0000 UTC m=+147.396987383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.816298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.817224 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.317208828 +0000 UTC m=+147.398146142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.920070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.920880 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.420865651 +0000 UTC m=+147.501802955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.022444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.022737 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.522726449 +0000 UTC m=+147.603663763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.115324 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:55 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:55 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:55 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.115375 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.123161 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.123352 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:55.623325993 +0000 UTC m=+147.704263317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.123494 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.123773 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.623765615 +0000 UTC m=+147.704702929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.224464 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.224583 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.724562644 +0000 UTC m=+147.805499968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.224817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.225082 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.725073198 +0000 UTC m=+147.806010512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.325940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.326144 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.826118764 +0000 UTC m=+147.907056078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.326288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.326580 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.826569465 +0000 UTC m=+147.907506779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.426873 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.427205 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.927191521 +0000 UTC m=+148.008128835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.528558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.528904 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.028888954 +0000 UTC m=+148.109826268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.629374 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.629500 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.129476598 +0000 UTC m=+148.210413902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.629667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.629948 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.12993619 +0000 UTC m=+148.210873504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.663005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"f0ce6046d0ab83b94ac4d4ae21e0e2aee7d12dc8629bf47e4f4767c2b9df51ab"} Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.730740 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.730963 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.230938115 +0000 UTC m=+148.311875419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.731162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.731484 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.231472769 +0000 UTC m=+148.312410083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.831828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.832239 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.332224118 +0000 UTC m=+148.413161432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.847653 4867 csr.go:261] certificate signing request csr-d6v2v is approved, waiting to be issued Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.854130 4867 csr.go:257] certificate signing request csr-d6v2v is issued Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.933883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.934168 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.434158067 +0000 UTC m=+148.515095381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.035461 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.035676 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.535640214 +0000 UTC m=+148.616577528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.035958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.036276 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.53626159 +0000 UTC m=+148.617198904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.108862 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:56 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:56 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:56 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.108918 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.137707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.137893 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:56.637869061 +0000 UTC m=+148.718806375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.138087 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.138420 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.638411804 +0000 UTC m=+148.719349118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.243167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.243317 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.743288148 +0000 UTC m=+148.824225482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.243469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.243819 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.743809262 +0000 UTC m=+148.824746656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.344105 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.344455 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.844441238 +0000 UTC m=+148.925378552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.446120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.446396 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.946385627 +0000 UTC m=+149.027322941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.546617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.547012 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.046998852 +0000 UTC m=+149.127936166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.648671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.648998 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.148987542 +0000 UTC m=+149.229924856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.750185 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.750608 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.250593123 +0000 UTC m=+149.331530437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.852195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.852586 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.352570823 +0000 UTC m=+149.433508137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.855445 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-14 04:06:55 +0000 UTC, rotation deadline is 2027-01-02 23:44:14.270951759 +0000 UTC Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.855466 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7747h32m17.415488442s for next certificate rotation Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.908550 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.908597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919464 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919535 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919543 
4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919560 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.923208 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.953394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.953559 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.453541396 +0000 UTC m=+149.534478710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.953643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.953874 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.453866495 +0000 UTC m=+149.534803809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.054572 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.054775 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.554745287 +0000 UTC m=+149.635682591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.055108 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.055384 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.555376583 +0000 UTC m=+149.636313897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.087785 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.088627 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.093157 4867 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8qkg2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.093200 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" podUID="894233bb-65ed-4cdd-ac61-7a8bd8f66140" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.095976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.110809 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:57 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:57 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:57 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.110857 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.155580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.156969 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.157066 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.657053575 +0000 UTC m=+149.737990889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.158126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.158728 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.658720598 +0000 UTC m=+149.739657912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.259112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.259279 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.759260551 +0000 UTC m=+149.840197865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.259417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.260747 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.760729839 +0000 UTC m=+149.841667153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.283640 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.284227 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.298155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.298370 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.307312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.310736 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.311575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.318821 4867 patch_prober.go:28] interesting pod/console-f9d7485db-c4c52 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.318857 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.361425 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:57.861410686 +0000 UTC m=+149.942348000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.462456 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.962438791 +0000 UTC m=+150.043376105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.463166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.504329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.557160 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.563738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.563922 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.063887708 +0000 UTC m=+150.144825022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.564015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.564281 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.064273808 +0000 UTC m=+150.145211122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.632990 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.662275 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.664891 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.665257 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.165238102 +0000 UTC m=+150.246175416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.687459 4867 generic.go:334] "Generic (PLEG): container finished" podID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerID="aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6" exitCode=0 Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.687491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerDied","Data":"aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.738782 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" containerID="cri-o://b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e" gracePeriod=30 Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.738987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"b67964cbe053fa4b504891f9d1320fbbf85de3580e88f6025eb397bf3a820c3e"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.739044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" 
event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"30a4d1a2b9a2f97ee6204f6ea64d14f2970f5d990b25c81ad1207f0552e02227"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.751587 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.770117 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.771061 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.27104996 +0000 UTC m=+150.351987274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.773998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.774924 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.779200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.805377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.840848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.863997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.880831 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: 
\"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.882776 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.382744398 +0000 UTC m=+150.463681712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986566 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986853 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwl4\" 
(UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.987856 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.487842707 +0000 UTC m=+150.568780021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.008270 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.009431 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.016879 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.028532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.033263 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.059816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.062231 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.088486 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-14 04:11:58.588467283 +0000 UTC m=+150.669404597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.101930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.106607 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.124818 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:58 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:58 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:58 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.124863 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.149252 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.179820 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180945 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197629 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.200348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" 
(UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.201203 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.209761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.216214 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.71619711 +0000 UTC m=+150.797134414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.238605 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.248635 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.259671 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.260354 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.263383 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.263910 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.273922 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298601 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.298789 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.798774925 +0000 UTC m=+150.879712239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.343370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.372652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.374282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.386835 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.391597 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400719 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400738 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.401376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.401629 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.901618747 +0000 UTC m=+150.982556061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.401846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.441390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.505881 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.505968 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506303 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.506561 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.006545233 +0000 UTC m=+151.087482547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.550935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.597125 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.607295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.607339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608548 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.608626 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.108615445 +0000 UTC m=+151.189552749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.632837 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.714097 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.714525 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:59.214489954 +0000 UTC m=+151.295427268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.717403 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.734272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.755077 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"9ac639b6394c5e1017aeaf569eada5d729a39bf526b8497bd4296ca3b0755153"} Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.764991 4867 generic.go:334] "Generic (PLEG): container finished" podID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerID="b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e" exitCode=0 Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.765082 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerDied","Data":"b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e"} Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.783635 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerStarted","Data":"377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4"} Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.816184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.816535 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.316521556 +0000 UTC m=+151.397458870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.836567 4867 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.916892 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.917084 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.417062369 +0000 UTC m=+151.497999683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.917666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.918721 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.418712211 +0000 UTC m=+151.499649575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.919892 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.962122 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024715 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.025027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.025093 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.525064193 +0000 UTC m=+151.606001507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.030611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.034125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.067199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.075871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.106126 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:59 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:59 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:59 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.106165 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.125961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.126251 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.626239772 +0000 UTC m=+151.707177086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.171385 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.198462 4867 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-14T04:11:58.836615758Z","Handler":null,"Name":""} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.207500 4867 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.207551 4867 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 14 04:11:59 crc kubenswrapper[4867]: W0214 04:11:59.211211 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5be31bdb_ced4_4935_8102_e6ddc671474f.slice/crio-9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9 WatchSource:0}: Error finding container 9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9: Status 404 returned error can't find the container with id 9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9 Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.226300 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.260861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.262593 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.267079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.273828 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.329591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.332264 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.332289 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.333604 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.432957 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433257 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433783 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume" (OuterVolumeSpecName: "config-volume") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.444607 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.462479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8" (OuterVolumeSpecName: "kube-api-access-s6dn8") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "kube-api-access-s6dn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.467261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.522099 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537354 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537401 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537415 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") on node \"crc\" DevicePath \"\"" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 
04:11:59.577207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770051 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"] Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.770396 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770407 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770496 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.771350 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.774684 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.790784 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"] Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.801155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"dd9f424d26487bd816b5e8b2553faae6b604eacd9336a79c5c1317a6caa66f61"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.803642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerStarted","Data":"a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.804649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerStarted","Data":"9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.806046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.806842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" 
event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"23ddca82e7ec32caacf54a7cebc1ffb43fed1e460daeba077f08fce659c5713c"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerDied","Data":"e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808408 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808407 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.809740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"3e5452fa8e8c6fb391a2e17ab4b7c984074e14d79a0538110dcd9e41b18bd839"} Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.817434 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" podStartSLOduration=15.817419425 podStartE2EDuration="15.817419425s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:59.81722956 +0000 UTC m=+151.898166874" watchObservedRunningTime="2026-02-14 04:11:59.817419425 +0000 UTC m=+151.898356739" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: W0214 04:12:00.025015 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b WatchSource:0}: Error finding container dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b: Status 404 returned error can't find the container with id dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b Feb 14 04:12:00 crc kubenswrapper[4867]: W0214 04:12:00.026086 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b WatchSource:0}: Error finding container 
64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b: Status 404 returned error can't find the container with id 64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046697 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.068380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.102706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.107303 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:00 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:00 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:00 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.107429 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.176538 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.183843 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.187150 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.273217 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351584 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351678 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351731 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: 
\"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.455269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.455295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.485039 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: 
\"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.687460 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.690551 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.741855 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.829915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerStarted","Data":"93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.833325 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d" exitCode=0 Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.833670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.842939 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.847982 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.847963479 podStartE2EDuration="2.847963479s" 
podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:00.847380434 +0000 UTC m=+152.928317748" watchObservedRunningTime="2026-02-14 04:12:00.847963479 +0000 UTC m=+152.928900793" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.848088 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e" exitCode=0 Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.848154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " Feb 14 04:12:00 crc kubenswrapper[4867]: 
I0214 04:12:00.859799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.861578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca" (OuterVolumeSpecName: "client-ca") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.862689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.870273 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config" (OuterVolumeSpecName: "config") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.874696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerStarted","Data":"6ea0765f93238181496aa9ad98328dd359db53721f5f5fd14d5d2d61c6d3b39b"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.881110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.881166 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"add894549a2aff626db3cd5482bf5486b20d694394b5286fe468f9059e3f4b1d"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883289 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7" (OuterVolumeSpecName: "kube-api-access-bqcq7") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "kube-api-access-bqcq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.898869 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"90d63cc6554a718e0d4cbfb1e7b6d2e1fdaca86fdf3238edfbe5d97515589316"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.908821 4867 generic.go:334] "Generic (PLEG): container finished" podID="adff5c07-e04d-4412-9e26-a0d00b565646" containerID="a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65" exitCode=0 Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.908921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerDied","Data":"a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.912095 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa" exitCode=0 Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.912151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa"} Feb 14 04:12:00 crc 
kubenswrapper[4867]: I0214 04:12:00.914820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d4a2627f95fd3c188ed05c0d5e7f958011284b03877cacfc4dda17d1cf310d54"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerDied","Data":"c43a26497795da97ad6a6c4586b62e12ae1ccaaa8dd33d4cfe17199345411003"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916409 4867 scope.go:117] "RemoveContainer" containerID="b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.924629 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b"} Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965177 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965213 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965227 4867 reconciler_common.go:293] 
"Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965240 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965254 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980336 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"] Feb 14 04:12:00 crc kubenswrapper[4867]: E0214 04:12:00.980560 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980571 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980768 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.981815 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.985357 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.986313 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.990546 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"] Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.994540 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.005978 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" path="/var/lib/kubelet/pods/07dd9173-fdfe-4edb-821b-37c94116b53e/volumes" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.006636 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.048096 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.105859 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:01 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:01 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:01 crc kubenswrapper[4867]: healthz check 
failed Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.106097 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.251400 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.251458 4867 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271631 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.272281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.272397 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.301457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.305735 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.369688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.370676 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.379181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.474723 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.475131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.475166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.578138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.578206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.595093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.647771 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"] Feb 14 04:12:01 crc 
kubenswrapper[4867]: W0214 04:12:01.659305 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ce8d91_a436_4fe6_b5fd_1988e588ded8.slice/crio-4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503 WatchSource:0}: Error finding container 4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503: Status 404 returned error can't find the container with id 4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503 Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.790497 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.791686 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.793568 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.794909 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795210 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795253 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795273 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795633 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.799153 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.802886 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.817720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880836 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880920 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.939270 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1d273146d82792583f5426f64d40ca1b61c93f2ff6a5501b7da9405e4007554e"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.940682 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerStarted","Data":"4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.943045 4867 generic.go:334] "Generic (PLEG): container finished" podID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerID="93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf" exitCode=0 Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.943141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerDied","Data":"93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.944494 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f" exitCode=0 Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.944577 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.946204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"68b3b8905f1bedcc835a898b667bf7ab79f6fa8df4b53b86e14c3fef1d2938f6"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.946441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.947602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerStarted","Data":"984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.947737 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.949100 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" 
containerID="7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff" exitCode=0 Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.949164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.953566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6b246566537b41c130bb12c4f84dc51f22f10bd6f92a37c1c392801346072b07"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.958210 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613" exitCode=0 Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.958287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.963812 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"9414f47d96386d3ff0af0fa0050f52950e5a9a8e484274e0b79dd8bd6d0a669b"} Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981669 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.982947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod 
\"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.984310 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.984322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.994266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.019037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.050804 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" podStartSLOduration=131.050781596 podStartE2EDuration="2m11.050781596s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:02.047073732 +0000 UTC m=+154.128011046" watchObservedRunningTime="2026-02-14 04:12:02.050781596 +0000 UTC m=+154.131718910" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.110819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.118967 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.122231 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:02 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:02 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:02 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.122441 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.155684 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.443167 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.501524 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.574024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.599341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"adff5c07-e04d-4412-9e26-a0d00b565646\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600272 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"adff5c07-e04d-4412-9e26-a0d00b565646\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "adff5c07-e04d-4412-9e26-a0d00b565646" (UID: "adff5c07-e04d-4412-9e26-a0d00b565646"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600587 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.607203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "adff5c07-e04d-4412-9e26-a0d00b565646" (UID: "adff5c07-e04d-4412-9e26-a0d00b565646"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.704864 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.971625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerStarted","Data":"9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.972071 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.972085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerStarted","Data":"5207b73aaa57eb157e090896dbc459c86fd8684eae6a2b10610ff75ec8af8595"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974425 4867 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974425 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerDied","Data":"377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974551 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.976217 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.976253 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979082 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864" exitCode=0 Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" 
event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"873ab4fab8bcde5b4877631fe5b476f986fe024be500dd128844b9b8ff975f35"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.982822 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3" exitCode=0 Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.985277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.998907 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podStartSLOduration=5.998887249 podStartE2EDuration="5.998887249s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:02.98716389 +0000 UTC m=+155.068101204" watchObservedRunningTime="2026-02-14 04:12:02.998887249 +0000 UTC m=+155.079824563" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.105909 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:03 crc kubenswrapper[4867]: 
[-]has-synced failed: reason withheld Feb 14 04:12:03 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:03 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.105989 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.129292 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.473169 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.524941 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"5be31bdb-ced4-4935-8102-e6ddc671474f\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.525023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"5be31bdb-ced4-4935-8102-e6ddc671474f\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.525272 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5be31bdb-ced4-4935-8102-e6ddc671474f" (UID: "5be31bdb-ced4-4935-8102-e6ddc671474f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.541798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5be31bdb-ced4-4935-8102-e6ddc671474f" (UID: "5be31bdb-ced4-4935-8102-e6ddc671474f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.626465 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.626496 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035813 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerDied","Data":"9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9"} Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035859 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035860 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9"
Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.042053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.110953 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:12:04 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:12:04 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:12:04 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.111010 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:12:05 crc kubenswrapper[4867]: I0214 04:12:05.108521 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:12:05 crc kubenswrapper[4867]: I0214 04:12:05.113177 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qlkzp"
Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.919973 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.920029 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.919972 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.920130 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:07 crc kubenswrapper[4867]: I0214 04:12:07.312607 4867 patch_prober.go:28] interesting pod/console-f9d7485db-c4c52 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 14 04:12:07 crc kubenswrapper[4867]: I0214 04:12:07.313003 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.384124 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.401349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.613167 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.919788 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920089 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920136 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920749 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"} pod="openshift-console/downloads-7954f5f757-x9sjv" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920833 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" containerID="cri-o://6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0" gracePeriod=2
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.919974 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920938 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.921206 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.921271 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.027054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"]
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.027356 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" containerID="cri-o://9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" gracePeriod=30
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.052792 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"]
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.053018 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" containerID="cri-o://ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad" gracePeriod=30
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.316628 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.321616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.829274 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.829341 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.251431 4867 generic.go:334] "Generic (PLEG): container finished" podID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerID="ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad" exitCode=0
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.251484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerDied","Data":"ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad"}
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.252918 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerID="9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" exitCode=0
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.252964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerDied","Data":"9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8"}
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.254353 4867 generic.go:334] "Generic (PLEG): container finished" podID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerID="6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0" exitCode=0
Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.254371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerDied","Data":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"}
Feb 14 04:12:19 crc kubenswrapper[4867]: I0214 04:12:19.585667 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:12:22 crc kubenswrapper[4867]: I0214 04:12:22.113119 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Feb 14 04:12:22 crc kubenswrapper[4867]: I0214 04:12:22.113449 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Feb 14 04:12:26 crc kubenswrapper[4867]: I0214 04:12:26.921121 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:26 crc kubenswrapper[4867]: I0214 04:12:26.921181 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:27 crc kubenswrapper[4867]: I0214 04:12:27.827556 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 14 04:12:27 crc kubenswrapper[4867]: I0214 04:12:27.827825 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 14 04:12:28 crc kubenswrapper[4867]: I0214 04:12:28.063454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"
Feb 14 04:12:31 crc kubenswrapper[4867]: I0214 04:12:31.250948 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:12:31 crc kubenswrapper[4867]: I0214 04:12:31.251547 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:12:33 crc kubenswrapper[4867]: I0214 04:12:33.113550 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" start-of-body=
Feb 14 04:12:33 crc kubenswrapper[4867]: I0214 04:12:33.113624 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout"
Feb 14 04:12:36 crc kubenswrapper[4867]: I0214 04:12:36.920023 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:12:36 crc kubenswrapper[4867]: I0214 04:12:36.920428 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206149 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 14 04:12:37 crc kubenswrapper[4867]: E0214 04:12:37.206430 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206452 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: E0214 04:12:37.206467 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206475 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206614 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206633 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.207116 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.208801 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.209747 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.219032 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.320917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.321084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422259 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.460432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.534497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.108909 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.114663 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142424 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"]
Feb 14 04:12:38 crc kubenswrapper[4867]: E0214 04:12:38.142746 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142762 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: E0214 04:12:38.142773 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142783 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142955 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142976 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.143680 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.208107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"]
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242817 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242899 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242984 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243441 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243971 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config" (OuterVolumeSpecName: "config") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244186 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244311 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config" (OuterVolumeSpecName: "config") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.248438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.248740 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.249189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j" (OuterVolumeSpecName: "kube-api-access-vcj7j") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "kube-api-access-vcj7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.250295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6" (OuterVolumeSpecName: "kube-api-access-q2kd6") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "kube-api-access-q2kd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344884 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344953 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344965 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344975 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344983 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344993 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345001 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345010 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345020 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345032 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.346484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.346749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.347155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.350005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.361626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerDied","Data":"d80c060a94d17951aad5e051f55bf43d373a158b1129e1b3c3d94726f3601c49"}
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393800 4867 scope.go:117] "RemoveContainer" containerID="ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393747 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.397287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerDied","Data":"5207b73aaa57eb157e090896dbc459c86fd8684eae6a2b10610ff75ec8af8595"}
Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.397473 4867 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.423632 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.427410 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.439370 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.445136 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.466540 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.828421 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.828608 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 04:12:39 
crc kubenswrapper[4867]: I0214 04:12:39.013916 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" path="/var/lib/kubelet/pods/14efaf39-985f-45ea-ab79-0b8b2044c7f7/volumes" Feb 14 04:12:39 crc kubenswrapper[4867]: I0214 04:12:39.015469 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" path="/var/lib/kubelet/pods/dd1a4559-f0ef-4bc6-b318-2c91b798b76d/volumes" Feb 14 04:12:39 crc kubenswrapper[4867]: I0214 04:12:39.278605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.823097 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.825755 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.831895 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832064 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832136 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832219 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832229 4867 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.842728 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876328 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876428 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: 
\"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.977869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.977973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.978014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.978042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " 
pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.979239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.979531 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.982089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.996034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:41 crc kubenswrapper[4867]: I0214 04:12:41.150902 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.811725 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.813596 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.815647 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004322 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.022599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.171185 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.351096 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.351581 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n9vq9_openshift-marketplace(21ce8d91-a436-4fe6-b5fd-1988e588ded8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.352783 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" Feb 14 04:12:46 crc kubenswrapper[4867]: I0214 04:12:46.920205 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:46 crc kubenswrapper[4867]: I0214 04:12:46.920258 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:47 crc kubenswrapper[4867]: E0214 04:12:47.607000 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.984827 
4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.985315 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmwl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-5mz22_openshift-marketplace(4cf2e46b-a553-4b29-b6f2-02072b8660d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.986474 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.251603 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.270616 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.270823 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzh4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x4khs_openshift-marketplace(f27f899c-e2d8-4601-9a36-4582192436b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.271944 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" Feb 14 04:12:56 crc 
kubenswrapper[4867]: I0214 04:12:56.918966 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:56 crc kubenswrapper[4867]: I0214 04:12:56.919393 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.831702 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.832021 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtnvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8vs6k_openshift-marketplace(b6d1c1c6-899d-4220-8f80-defae4ba56f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.833257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" Feb 14 04:13:01 crc 
kubenswrapper[4867]: I0214 04:13:01.250770 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.250845 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.250904 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.252241 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.253558 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" gracePeriod=600 Feb 14 04:13:01 crc kubenswrapper[4867]: E0214 04:13:01.339093 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 04:13:01 crc kubenswrapper[4867]: E0214 04:13:01.339242 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmkjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jc878_openshift-marketplace(fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:01 crc 
kubenswrapper[4867]: E0214 04:13:01.340534 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jc878" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" Feb 14 04:13:02 crc kubenswrapper[4867]: I0214 04:13:02.530773 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" exitCode=0 Feb 14 04:13:02 crc kubenswrapper[4867]: I0214 04:13:02.530820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.891945 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.892174 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp526,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2cjxf_openshift-marketplace(0683c2f1-5695-4ef3-b6cc-31fe804c6dc6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.893443 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" Feb 14 04:13:05 crc 
kubenswrapper[4867]: E0214 04:13:05.080353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jc878" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081025 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081182 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081202 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.158195 4867 scope.go:117] "RemoveContainer" containerID="9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.440540 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.527448 4867 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.534007 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0e717e9c_3ff4_420e_8f69_26044fc5e482.slice/crio-798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438 WatchSource:0}: Error finding container 798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438: Status 404 returned error can't find the container with id 798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438 Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.564422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerStarted","Data":"eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc"} Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.565396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerStarted","Data":"798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438"} Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.565577 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.573809 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7206174b_645b_4924_8345_d1d4b1a5ec39.slice/crio-d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c WatchSource:0}: Error finding container d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c: Status 404 returned error can't find the container with id d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.728592 4867 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.746695 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.760191 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc312f687_8694_4be3_a1ac_ddb1a0e8e1e6.slice/crio-e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7 WatchSource:0}: Error finding container e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7: Status 404 returned error can't find the container with id e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7 Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.481641 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.482106 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f5g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gvh7q_openshift-marketplace(2e834244-05c0-4e48-9e2a-7c69cf930951): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.483457 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" Feb 14 04:13:06 crc 
kubenswrapper[4867]: I0214 04:13:06.574144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerStarted","Data":"f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.574535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerStarted","Data":"541ea6e9e6c3a77aac7816654698f9c602bfc9a3197a2fd757215b2f093807ec"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.574894 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.577489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerStarted","Data":"1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.579402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerStarted","Data":"e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.581349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"428657683c188c9151d48ece253a11bffad3e756aad099cbaca848114b650376"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.581376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583241 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerStarted","Data":"d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerStarted","Data":"e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.589598 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"f1032fb4248d8848aa74a32078e94558edcfccf5692ba81381e6264aab175df3"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.591110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.591157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.591541 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness 
probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.593900 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.599711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.613086 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podStartSLOduration=29.613070936 podStartE2EDuration="29.613070936s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.611555087 +0000 UTC m=+218.692492401" watchObservedRunningTime="2026-02-14 04:13:06.613070936 +0000 UTC m=+218.694008250" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.614712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" podStartSLOduration=29.614705187 podStartE2EDuration="29.614705187s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.597644234 +0000 UTC 
m=+218.678581568" watchObservedRunningTime="2026-02-14 04:13:06.614705187 +0000 UTC m=+218.695642501" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.652569 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=24.652547599000002 podStartE2EDuration="24.652547599s" podCreationTimestamp="2026-02-14 04:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.652490218 +0000 UTC m=+218.733427532" watchObservedRunningTime="2026-02-14 04:13:06.652547599 +0000 UTC m=+218.733484923" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.780057 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.780599 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztsqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s8hwg_openshift-marketplace(1f7707be-b4dc-47c7-8a74-bc46399acd36): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.781873 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" Feb 14 04:13:06 crc 
kubenswrapper[4867]: I0214 04:13:06.919703 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919761 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919798 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919862 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.941278 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.965579 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=29.965563744 podStartE2EDuration="29.965563744s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-14 04:13:06.715776486 +0000 UTC m=+218.796713800" watchObservedRunningTime="2026-02-14 04:13:06.965563744 +0000 UTC m=+219.046501058" Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.606015 4867 generic.go:334] "Generic (PLEG): container finished" podID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerID="1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232" exitCode=0 Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.606259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerDied","Data":"1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.609404 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"8ca174d87caff1de9590fea61881d6666f195d521ba3dc01cd9c9bdbc3ee5c9c"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.611347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.611854 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.612236 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:07 crc kubenswrapper[4867]: E0214 04:13:07.613026 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.675735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4b6k5" podStartSLOduration=197.675716793 podStartE2EDuration="3m17.675716793s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:07.65592346 +0000 UTC m=+219.736860794" watchObservedRunningTime="2026-02-14 04:13:07.675716793 +0000 UTC m=+219.756654107" Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.617340 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e" exitCode=0 Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.617444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e"} Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.619610 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 
14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.619682 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.933810 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060871 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f1bacbd-3b75-4814-83cc-1569cbbf36bb" (UID: "3f1bacbd-3b75-4814-83cc-1569cbbf36bb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.061067 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.066267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f1bacbd-3b75-4814-83cc-1569cbbf36bb" (UID: "3f1bacbd-3b75-4814-83cc-1569cbbf36bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.161674 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerDied","Data":"eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc"}
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626222 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc"
Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626313 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 04:13:10 crc kubenswrapper[4867]: I0214 04:13:10.632388 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerStarted","Data":"59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb"}
Feb 14 04:13:10 crc kubenswrapper[4867]: I0214 04:13:10.651839 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9vq9" podStartSLOduration=3.6370444280000003 podStartE2EDuration="1m10.651824938s" podCreationTimestamp="2026-02-14 04:12:00 +0000 UTC" firstStartedPulling="2026-02-14 04:12:03.002679266 +0000 UTC m=+155.083616580" lastFinishedPulling="2026-02-14 04:13:10.017459776 +0000 UTC m=+222.098397090" observedRunningTime="2026-02-14 04:13:10.649802297 +0000 UTC m=+222.730739611" watchObservedRunningTime="2026-02-14 04:13:10.651824938 +0000 UTC m=+222.732762252"
Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.305930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.306160 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.643998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78"}
Feb 14 04:13:12 crc kubenswrapper[4867]: I0214 04:13:12.946503 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:13:12 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:13:12 crc kubenswrapper[4867]: >
Feb 14 04:13:13 crc kubenswrapper[4867]: I0214 04:13:13.659618 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78" exitCode=0
Feb 14 04:13:13 crc kubenswrapper[4867]: I0214 04:13:13.659657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78"}
Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919155 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919686 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919158 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919808 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:13:17 crc kubenswrapper[4867]: I0214 04:13:17.681701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3"}
Feb 14 04:13:17 crc kubenswrapper[4867]: I0214 04:13:17.708215 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5mz22" podStartSLOduration=4.296732424 podStartE2EDuration="1m20.708195191s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.842659324 +0000 UTC m=+152.923596638" lastFinishedPulling="2026-02-14 04:13:17.254122101 +0000 UTC m=+229.335059405" observedRunningTime="2026-02-14 04:13:17.701934102 +0000 UTC m=+229.782871426" watchObservedRunningTime="2026-02-14 04:13:17.708195191 +0000 UTC m=+229.789132525"
Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.394108 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5mz22"
Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.394459 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5mz22"
Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.690243 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4"}
Feb 14 04:13:19 crc kubenswrapper[4867]: I0214 04:13:19.482093 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:13:19 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:13:19 crc kubenswrapper[4867]: >
Feb 14 04:13:19 crc kubenswrapper[4867]: I0214 04:13:19.735196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708"}
Feb 14 04:13:20 crc kubenswrapper[4867]: I0214 04:13:20.753703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb"}
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.425425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.473911 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.762073 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4" exitCode=0
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.762135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4"}
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.764790 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708" exitCode=0
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.764835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708"}
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.770120 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerID="85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb" exitCode=0
Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.770453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb"}
Feb 14 04:13:26 crc kubenswrapper[4867]: I0214 04:13:26.923887 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-x9sjv"
Feb 14 04:13:28 crc kubenswrapper[4867]: I0214 04:13:28.446564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5mz22"
Feb 14 04:13:28 crc kubenswrapper[4867]: I0214 04:13:28.499404 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5mz22"
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.834382 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.837225 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.843998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.845940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.847699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.856995 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2cjxf" podStartSLOduration=4.727915658 podStartE2EDuration="1m36.856979987s" podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.950242563 +0000 UTC m=+154.031179877" lastFinishedPulling="2026-02-14 04:13:34.079306892 +0000 UTC m=+246.160244206" observedRunningTime="2026-02-14 04:13:34.854573566 +0000 UTC m=+246.935510890" watchObservedRunningTime="2026-02-14 04:13:34.856979987 +0000 UTC m=+246.937917301"
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.862377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931"}
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.922136 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jc878" podStartSLOduration=2.852256427 podStartE2EDuration="1m33.922118222s" podCreationTimestamp="2026-02-14 04:12:01 +0000 UTC" firstStartedPulling="2026-02-14 04:12:02.983522458 +0000 UTC m=+155.064459772" lastFinishedPulling="2026-02-14 04:13:34.053384233 +0000 UTC m=+246.134321567" observedRunningTime="2026-02-14 04:13:34.906977797 +0000 UTC m=+246.987915111" watchObservedRunningTime="2026-02-14 04:13:34.922118222 +0000 UTC m=+247.003055536"
Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.968050 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x4khs" podStartSLOduration=3.794932676 podStartE2EDuration="1m36.968030719s" podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.853233573 +0000 UTC m=+152.934170887" lastFinishedPulling="2026-02-14 04:13:34.026331616 +0000 UTC m=+246.107268930" observedRunningTime="2026-02-14 04:13:34.966396357 +0000 UTC m=+247.047333671" watchObservedRunningTime="2026-02-14 04:13:34.968030719 +0000 UTC m=+247.048968033"
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.868599 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6" exitCode=0
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.868671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6"}
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.871447 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422" exitCode=0
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.871535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422"}
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.873855 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63" exitCode=0
Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.873920 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63"}
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.881652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6"}
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.884144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e"}
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.886499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898"}
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.910038 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8hwg" podStartSLOduration=2.605449596 podStartE2EDuration="1m36.910021714s" podCreationTimestamp="2026-02-14 04:12:00 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.968048537 +0000 UTC m=+154.048985851" lastFinishedPulling="2026-02-14 04:13:36.272620655 +0000 UTC m=+248.353557969" observedRunningTime="2026-02-14 04:13:36.90712359 +0000 UTC m=+248.988060904" watchObservedRunningTime="2026-02-14 04:13:36.910021714 +0000 UTC m=+248.990959038"
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.929614 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8vs6k" podStartSLOduration=4.423403619 podStartE2EDuration="1m39.929595721s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.913850629 +0000 UTC m=+152.994787943" lastFinishedPulling="2026-02-14 04:13:36.420042731 +0000 UTC m=+248.500980045" observedRunningTime="2026-02-14 04:13:36.926043411 +0000 UTC m=+249.006980745" watchObservedRunningTime="2026-02-14 04:13:36.929595721 +0000 UTC m=+249.010533035"
Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.946634 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvh7q" podStartSLOduration=3.548929708 podStartE2EDuration="1m37.946616824s" podCreationTimestamp="2026-02-14 04:11:59 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.945704687 +0000 UTC m=+154.026642011" lastFinishedPulling="2026-02-14 04:13:36.343391813 +0000 UTC m=+248.424329127" observedRunningTime="2026-02-14 04:13:36.944244644 +0000 UTC m=+249.025181958" watchObservedRunningTime="2026-02-14 04:13:36.946616824 +0000 UTC m=+249.027554138"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.345050 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8vs6k"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.345106 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8vs6k"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.506892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x4khs"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.506938 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x4khs"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.561017 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x4khs"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.735339 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.735401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.778034 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:13:39 crc kubenswrapper[4867]: I0214 04:13:39.388791 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:13:39 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:13:39 crc kubenswrapper[4867]: >
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.103000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.103056 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.141876 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.688552 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.688601 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.728583 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.818188 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.818246 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.863630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.948455 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.904388 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.905850 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.905937 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906104 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906474 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906598 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906598 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906813 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" gracePeriod=15
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906832 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" gracePeriod=15
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.906977 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907046 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907111 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907173 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907000 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" gracePeriod=15
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906952 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" gracePeriod=15
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907306 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907405 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907426 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907432 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907448 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907453 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907464 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907469 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906957 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" gracePeriod=15
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907676 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907690 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907700 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907707 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907716 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907723 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.913389 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069382 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069672 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170472 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170571 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170581 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170608 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170651 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:44 crc
kubenswrapper[4867]: I0214 04:13:44.940951 4867 generic.go:334] "Generic (PLEG): container finished" podID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerID="e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.941052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerDied","Data":"e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219"} Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.941976 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.943233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.944361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945155 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945177 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945185 4867 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945195 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" exitCode=2 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945226 4867 scope.go:117] "RemoveContainer" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.869237 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.869582 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870078 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870558 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870918 4867 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: I0214 04:13:45.870952 4867 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.871118 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Feb 14 04:13:45 crc kubenswrapper[4867]: I0214 04:13:45.954008 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.004247 4867 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" volumeName="registry-storage" Feb 14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.071888 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.298960 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.299624 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400193 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400290 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400336 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock" (OuterVolumeSpecName: "var-lock") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400599 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400612 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.406148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.473149 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.502093 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerDied","Data":"798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438"} Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970756 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970813 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.984355 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: E0214 04:13:47.274524 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.611731 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.612863 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.613361 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.613845 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716085 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716393 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818097 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818135 4867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818147 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.978573 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979318 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979560 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" exitCode=0 Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979660 4867 scope.go:117] "RemoveContainer" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.992292 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.992792 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.994002 4867 scope.go:117] "RemoveContainer" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.007961 4867 scope.go:117] "RemoveContainer" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.022938 4867 scope.go:117] "RemoveContainer" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.035573 4867 scope.go:117] "RemoveContainer" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 
04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.048521 4867 scope.go:117] "RemoveContainer" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064418 4867 scope.go:117] "RemoveContainer" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.064855 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": container with ID starting with 37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48 not found: ID does not exist" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064899 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48"} err="failed to get container status \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": rpc error: code = NotFound desc = could not find container \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": container with ID starting with 37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064935 4867 scope.go:117] "RemoveContainer" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065193 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": container with ID starting with 7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7 not found: ID does not exist" 
containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065222 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7"} err="failed to get container status \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": rpc error: code = NotFound desc = could not find container \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": container with ID starting with 7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065238 4867 scope.go:117] "RemoveContainer" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065569 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": container with ID starting with 44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc not found: ID does not exist" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065601 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc"} err="failed to get container status \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": rpc error: code = NotFound desc = could not find container \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": container with ID starting with 44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065623 4867 scope.go:117] 
"RemoveContainer" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065932 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": container with ID starting with 3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243 not found: ID does not exist" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065958 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243"} err="failed to get container status \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": rpc error: code = NotFound desc = could not find container \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": container with ID starting with 3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065974 4867 scope.go:117] "RemoveContainer" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.066266 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": container with ID starting with ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe not found: ID does not exist" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe"} err="failed to get container status \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": rpc error: code = NotFound desc = could not find container \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": container with ID starting with ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066299 4867 scope.go:117] "RemoveContainer" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.066493 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": container with ID starting with 6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302 not found: ID does not exist" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066524 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302"} err="failed to get container status \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": rpc error: code = NotFound desc = could not find container \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": container with ID starting with 6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.386353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.387670 4867 status_manager.go:851] "Failed to get 
status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.388171 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.388731 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.438219 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8vs6k"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.439381 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.440046 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.440581 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.549623 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x4khs"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.550789 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.550983 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.551242 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.551563 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.777856 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.778711 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779106 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779372 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779673 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779920 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.876041 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s"
Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.950020 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.950549 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:48 crc kubenswrapper[4867]: W0214 04:13:48.969664 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58 WatchSource:0}: Error finding container 1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58: Status 404 returned error can't find the container with id 1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58
Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.982652 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189401b4ad9a96ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,LastTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.995919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58"}
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.001744 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002096 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002368 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002579 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002780 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.010725 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.002254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd"}
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.002877 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: E0214 04:13:50.003008 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.003121 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.003721 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.004247 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147178 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147642 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147951 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148201 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148406 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148967 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.726751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.727430 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.727897 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.728322 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.728815 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.729382 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.729684 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:51 crc kubenswrapper[4867]: E0214 04:13:51.010214 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:13:51 crc kubenswrapper[4867]: E0214 04:13:51.635554 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189401b4ad9a96ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,LastTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 04:13:52 crc kubenswrapper[4867]: E0214 04:13:52.077111 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="6.4s"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.938039 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939124 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939413 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939692 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939881 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939898 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058403 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058502 4867 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a" exitCode=1
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058627 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a"}
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.059473 4867 scope.go:117] "RemoveContainer" containerID="898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.059611 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.060803 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061282 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061617 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061914 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.063486 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.063820 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:58 crc kubenswrapper[4867]: E0214 04:13:58.477601 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="7s"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.996525 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.999775 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.000229 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.000716 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001024 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001266 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001567 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001830 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002178 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002388 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002707 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003165 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003463 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003764 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.004000 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.011158 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.011185 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:13:59 crc kubenswrapper[4867]: E0214 04:13:59.011564 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.012031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:13:59 crc kubenswrapper[4867]: W0214 04:13:59.032657 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83 WatchSource:0}: Error finding container d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83: Status 404 returned error can't find the container with id d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.066971 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"017a857e2f79c693f0cb46747dd0950cd029e8ac2d878ddd91749e9ab1131b12"}
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067928 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83"}
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068187 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068401 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068623 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068824 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.069000 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.069187 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073598 4867 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="825c6dd04856560083774b31efb866a033b44ccbf051e38f178b6b74973b2388" exitCode=0
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"825c6dd04856560083774b31efb866a033b44ccbf051e38f178b6b74973b2388"}
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073917 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073942 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.074461 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: E0214 04:14:00.074580 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.074774 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075101 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075388 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075781 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.076209 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\":
dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.076571 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:01 crc kubenswrapper[4867]: I0214 04:14:01.085127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc896aad0a51feb240844629a8a04a80cdcc2164b3884b8194232f3e137bf9b8"} Feb 14 04:14:01 crc kubenswrapper[4867]: I0214 04:14:01.085705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34ecbcdcfe7caeb94d120d8ba76d3e82a5981b2cc4bc85eac4dcf4f90d72eee4"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc520929f005a2ab20b1521b62b3e23f3a20f05efaf0e71119e639b949a971fe"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094308 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094337 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" 
Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094344 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8bbd15c45b3dd04ad68f75e71e476e2ae097893c6bd36b7b144f4f60be34b421"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c641dba8692bb0b286320904a6b471e83001fc1bae562caab5c83019ed9c0c9"} Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.012633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.012673 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.016886 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.038136 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.182788 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.186310 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:14:07 crc kubenswrapper[4867]: I0214 04:14:07.129368 4867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:08 
crc kubenswrapper[4867]: I0214 04:14:08.124580 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:08 crc kubenswrapper[4867]: I0214 04:14:08.124640 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:08 crc kubenswrapper[4867]: I0214 04:14:08.128909 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.032344 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="531e3d0c-a640-414e-8d3e-3370088f5d13" Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.128736 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.128962 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.132544 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="531e3d0c-a640-414e-8d3e-3370088f5d13" Feb 14 04:14:14 crc kubenswrapper[4867]: I0214 04:14:14.043111 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:14:16 crc kubenswrapper[4867]: I0214 04:14:16.309628 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 04:14:16 crc kubenswrapper[4867]: 
I0214 04:14:16.872664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 04:14:16 crc kubenswrapper[4867]: I0214 04:14:16.896662 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.215292 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.250606 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.399835 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.488207 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.524329 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.625133 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.765350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.781448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.936909 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.938115 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.969478 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.298092 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.388033 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.589200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.769260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.933378 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.941402 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.098054 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.128964 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 
04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.263480 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.278554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.414595 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.686609 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.717856 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.741842 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.766736 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.377049 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.408189 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.697350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.724443 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.770094 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.812743 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.866010 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.984753 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.066665 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.069900 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.153964 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.167425 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.177711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.183535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 
04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.194246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.209836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.212192 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.284978 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.294951 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.353865 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.371036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.405519 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.411302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.425491 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.499947 4867 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.552371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.662619 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.719850 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.775048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.835984 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.841834 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.841974 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.863757 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.979470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.045897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 04:14:22 crc 
kubenswrapper[4867]: I0214 04:14:22.061143 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.086120 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.132971 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.154753 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.262346 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.293394 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.319601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.379001 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.393943 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.552377 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.614955 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.730313 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.784271 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.802711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.875770 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.880317 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.880369 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.883580 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.886183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.890081 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.925470 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.925447316 podStartE2EDuration="15.925447316s" podCreationTimestamp="2026-02-14 04:14:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:14:22.904075933 +0000 UTC m=+294.985013247" watchObservedRunningTime="2026-02-14 04:14:22.925447316 +0000 UTC m=+295.006384640" Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.963193 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.022216 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.058720 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.064321 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.123024 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.129415 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.219372 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.324176 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.375403 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.394832 4867 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.414064 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.645085 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.653922 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.678018 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.949423 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.966879 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.008410 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.112222 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.114675 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.141725 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 04:14:24 crc 
kubenswrapper[4867]: I0214 04:14:24.179200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.221833 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.228749 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.237597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.307836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.345726 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.411985 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.431920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.437374 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.445911 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.457422 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.529838 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.568892 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.765014 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.823901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.850726 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.852334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.942046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.047686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.087737 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.175223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.360564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.396818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.402124 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.504971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.624239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.674203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.698616 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.789989 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.805420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.852724 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.873740 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.908525 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.926727 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.961971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.988334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.067759 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.207062 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.213795 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.266579 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.294159 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.345954 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.695822 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.699541 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.772222 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.797297 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.936659 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.959203 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.973667 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.008522 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.023725 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.113883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.134891 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.158018 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.260907 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.306404 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.309145 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.371541 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.494243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.569288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.580036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.746767 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.754029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.759865 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.796581 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.955819 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.008946 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.064621 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.068425 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.351124 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.386184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.551236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.611844 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.640888 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.760029 4867 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.777610 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.786998 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.833347 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.838679 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.854202 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.862838 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.890171 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.916464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.928702 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.129158 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.161775 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.173688 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.200442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.256490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.471932 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.636063 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.691083 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.694693 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.700875 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.744753 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.792371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.803310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.809128 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.809448 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd" gracePeriod=5
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.850390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.853817 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.040793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.132119 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.158434 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.303367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.372125 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.520806 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.528382 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.573851 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.624942 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.803008 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.815376 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.893494 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.910617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.987844 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.990972 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.008692 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.164484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.191569 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.433940 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.525959 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.568536 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.652137 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.721390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.979810 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.030785 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.122172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.124531 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.242532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.541292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.573046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.663372 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.717761 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.775163 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.966608 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.307739 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.310029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.752191 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.278330 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.278727 4867 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd" exitCode=137
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.395378 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.395462 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548264 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548500 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548560 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548631 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548866 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548886 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548898 4867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548909 4867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.567289 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.650420 4867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284064 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284365 4867 scope.go:117] "RemoveContainer" containerID="e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd"
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284469 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:14:37 crc kubenswrapper[4867]: I0214 04:14:37.005659 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.096001 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.098162 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" containerID="cri-o://c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.103448 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.103936 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" containerID="cri-o://ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.117092 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.119168 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" containerID="cri-o://118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.122894 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.123130 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" containerID="cri-o://fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.134747 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.135260 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" containerID="cri-o://51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153150 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153219 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153231 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153437 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" containerID="cri-o://d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.154013 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server" containerID="cri-o://7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.154196 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jc878" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" containerID="cri-o://60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.159050 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.159361 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" containerID="cri-o://59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.183812 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"]
Feb 14 04:14:42 crc kubenswrapper[4867]: E0214 04:14:42.184042 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184054 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: E0214 04:14:42.184069 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184075 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184160 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184173 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.193648 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336881 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336925 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dp2r\" (UniqueName: \"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.341683 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.341753 4867 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.349160 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.349304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.360123 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.360209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.366319 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerID="118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.366405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 
04:14:42.368456 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.368474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.370335 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.370361 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.372113 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.372161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.373430 4867 generic.go:334] "Generic (PLEG): container finished" podID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerID="51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 
04:14:42.373501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerDied","Data":"51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.375834 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898" exitCode=0 Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.375859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898"} Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dp2r\" (UniqueName: \"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438477 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.441069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.445223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.456682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dp2r\" (UniqueName: \"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.774639 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.778947 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.784378 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.791249 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.801497 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.835488 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.841720 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842150 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842385 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.847620 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities" (OuterVolumeSpecName: "utilities") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.854915 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4" (OuterVolumeSpecName: "kube-api-access-rmwl4") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "kube-api-access-rmwl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.855043 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.905738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944431 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944460 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 
04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944648 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944757 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 
crc kubenswrapper[4867]: I0214 04:14:42.944902 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944926 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944947 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945219 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945236 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 
04:14:42.945247 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.946227 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities" (OuterVolumeSpecName: "utilities") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.947356 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities" (OuterVolumeSpecName: "utilities") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.947906 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities" (OuterVolumeSpecName: "utilities") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz" (OuterVolumeSpecName: "kube-api-access-mtnvz") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "kube-api-access-mtnvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950493 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526" (OuterVolumeSpecName: "kube-api-access-mp526") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "kube-api-access-mp526". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities" (OuterVolumeSpecName: "utilities") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.951790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw" (OuterVolumeSpecName: "kube-api-access-gq2jw") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "kube-api-access-gq2jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.954170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities" (OuterVolumeSpecName: "utilities") pod "f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.956497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.958956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n" (OuterVolumeSpecName: "kube-api-access-rzh4n") pod "f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "kube-api-access-rzh4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.960387 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.960653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr" (OuterVolumeSpecName: "kube-api-access-v76pr") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "kube-api-access-v76pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.961616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities" (OuterVolumeSpecName: "utilities") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.963948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities" (OuterVolumeSpecName: "utilities") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.964299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf" (OuterVolumeSpecName: "kube-api-access-ztsqf") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "kube-api-access-ztsqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.967004 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2" (OuterVolumeSpecName: "kube-api-access-8f5g2") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "kube-api-access-8f5g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.970424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt" (OuterVolumeSpecName: "kube-api-access-nmkjt") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "kube-api-access-nmkjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.990215 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.995409 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050386 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050603 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050679 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050739 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050802 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050870 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050929 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051289 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051364 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051431 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051527 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051602 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051686 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051761 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051846 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052018 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052106 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052234 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052343 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.070819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.080151 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.084024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.140423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.144032 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153930 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153965 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153976 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153986 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153995 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.326325 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"90d63cc6554a718e0d4cbfb1e7b6d2e1fdaca86fdf3238edfbe5d97515589316"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383169 4867 scope.go:117] "RemoveContainer" containerID="d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383177 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.385001 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerDied","Data":"0b46292ee8547b3f863b2a98bb8fb2cf8703a9757ad76735d9fe0ebd6ef2ffbd"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.385051 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.387012 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"9ac639b6394c5e1017aeaf569eada5d729a39bf526b8497bd4296ca3b0755153"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.387114 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.391395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"23ddca82e7ec32caacf54a7cebc1ffb43fed1e460daeba077f08fce659c5713c"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.391429 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.394283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.394641 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.396600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"3e5452fa8e8c6fb391a2e17ab4b7c984074e14d79a0538110dcd9e41b18bd839"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.396637 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"add894549a2aff626db3cd5482bf5486b20d694394b5286fe468f9059e3f4b1d"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398649 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398965 4867 scope.go:117] "RemoveContainer" containerID="7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.408549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"9414f47d96386d3ff0af0fa0050f52950e5a9a8e484274e0b79dd8bd6d0a669b"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.408637 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.412625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"873ab4fab8bcde5b4877631fe5b476f986fe024be500dd128844b9b8ff975f35"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.412698 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.417251 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" event={"ID":"33b576d8-f768-4fd2-895d-7d4ababe8714","Type":"ContainerStarted","Data":"816ecbead5e006e5b927df8e1b250bfef25e06ac1f4af4b58cde8881814d60ac"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.417297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" event={"ID":"33b576d8-f768-4fd2-895d-7d4ababe8714","Type":"ContainerStarted","Data":"0825a46fff7992e99f90d4a3200834f03176e7548c5fc3621a0c63e09014fe8b"}
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.418213 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.420292 4867 scope.go:117] "RemoveContainer" containerID="5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.420337 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.422637 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body=
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.422705 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.432388 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.439791 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.450454 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.451616 4867 scope.go:117] "RemoveContainer" containerID="51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.457743 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.464236 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.470058 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podStartSLOduration=1.470033726 podStartE2EDuration="1.470033726s" podCreationTimestamp="2026-02-14 04:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:14:43.462520981 +0000 UTC m=+315.543458315" watchObservedRunningTime="2026-02-14 04:14:43.470033726 +0000 UTC m=+315.550971040"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.473381 4867 scope.go:117] "RemoveContainer" containerID="fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.482293 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.486550 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.493021 4867 scope.go:117] "RemoveContainer" containerID="c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.500171 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.503156 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.509657 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.514594 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.518238 4867 scope.go:117] "RemoveContainer" containerID="3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.523181 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.528007 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.532678 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.537835 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.547419 4867 scope.go:117] "RemoveContainer" containerID="c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.551559 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.555118 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.563484 4867 scope.go:117] "RemoveContainer" containerID="07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.577192 4867 scope.go:117] "RemoveContainer" containerID="af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.593460 4867 scope.go:117] "RemoveContainer" containerID="59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.607173 4867 scope.go:117] "RemoveContainer" containerID="1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.622857 4867 scope.go:117] "RemoveContainer" containerID="743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.635237 4867 scope.go:117] "RemoveContainer" containerID="ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.650648 4867 scope.go:117] "RemoveContainer" containerID="fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.671704 4867 scope.go:117] "RemoveContainer" containerID="a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.686148 4867 scope.go:117] "RemoveContainer" containerID="118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.699630 4867 scope.go:117] "RemoveContainer" containerID="85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.713161 4867 scope.go:117] "RemoveContainer" containerID="7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.725190 4867 scope.go:117] "RemoveContainer" containerID="7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.737288 4867 scope.go:117] "RemoveContainer" containerID="984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.748405 4867 scope.go:117] "RemoveContainer" containerID="74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.758484 4867 scope.go:117] "RemoveContainer" containerID="60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.771718 4867 scope.go:117] "RemoveContainer" containerID="a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4"
Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.789387 4867 scope.go:117] "RemoveContainer" containerID="32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864"
Feb 14 04:14:44 crc kubenswrapper[4867]: I0214 04:14:44.446776 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:44 crc kubenswrapper[4867]: I0214 04:14:44.511932 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.005008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" path="/var/lib/kubelet/pods/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.005860 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" path="/var/lib/kubelet/pods/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.006315 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" path="/var/lib/kubelet/pods/1f7707be-b4dc-47c7-8a74-bc46399acd36/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.007395 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" path="/var/lib/kubelet/pods/21ce8d91-a436-4fe6-b5fd-1988e588ded8/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.008065 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" path="/var/lib/kubelet/pods/2e834244-05c0-4e48-9e2a-7c69cf930951/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.009160 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" path="/var/lib/kubelet/pods/4cf2e46b-a553-4b29-b6f2-02072b8660d9/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.009854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" path="/var/lib/kubelet/pods/b6d1c1c6-899d-4220-8f80-defae4ba56f0/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.010452 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" path="/var/lib/kubelet/pods/f27f899c-e2d8-4601-9a36-4582192436b7/volumes"
Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.011381 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" path="/var/lib/kubelet/pods/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab/volumes"
Feb 14 04:14:46 crc kubenswrapper[4867]: I0214 04:14:46.788532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 14 04:14:47 crc kubenswrapper[4867]: I0214 04:14:47.499631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 14 04:14:48 crc kubenswrapper[4867]: I0214 04:14:48.278670 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 14 04:14:52 crc kubenswrapper[4867]: I0214 04:14:52.903893 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 14 04:14:54 crc kubenswrapper[4867]: I0214 04:14:54.469031 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 04:14:55 crc kubenswrapper[4867]: I0214 04:14:55.511848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 14 04:14:57 crc kubenswrapper[4867]: I0214 04:14:57.370336 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 14 04:14:59 crc kubenswrapper[4867]: I0214 04:14:59.315351 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.161877 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"]
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162080 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162093 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162104 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162110 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162122 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162130 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162138 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162144 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162151 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162164 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162170 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162176 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162183 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162203 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162211 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-utilities"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162220 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162227 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162237 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162245 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162253 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162260 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162268 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162275 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162285 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162293 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162302 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162309 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162317 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162324 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162332 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162339 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162348 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162355 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162366 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162373 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-content"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162383 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162391 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server"
Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162404 4867 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162413 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162422 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162429 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162440 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162448 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162456 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162472 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162479 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162611 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162630 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162638 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162647 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162659 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162668 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162681 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162691 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162700 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.163189 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.165942 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.166148 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.172054 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.271685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.271986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.272113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373835 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373892 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373955 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.374851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.379785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.389748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.532704 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.908825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 04:15:00 crc kubenswrapper[4867]: W0214 04:15:00.912144 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb80aae8_69eb_4098_af64_8a1ace025d53.slice/crio-2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45 WatchSource:0}: Error finding container 2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45: Status 404 returned error can't find the container with id 2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45 Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528321 4867 generic.go:334] "Generic (PLEG): container finished" podID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerID="5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837" exitCode=0 Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerDied","Data":"5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837"} Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerStarted","Data":"2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45"} Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.542357 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.966792 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.798972 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911165 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.912107 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.916974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.917011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk" (OuterVolumeSpecName: "kube-api-access-mgpxk") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "kube-api-access-mgpxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015063 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015092 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015103 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538123 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerDied","Data":"2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45"} Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538164 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538182 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:05 crc kubenswrapper[4867]: I0214 04:15:05.456148 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 04:15:06 crc kubenswrapper[4867]: I0214 04:15:06.001106 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 04:15:06 crc kubenswrapper[4867]: I0214 04:15:06.418912 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 04:15:07 crc kubenswrapper[4867]: I0214 04:15:07.634122 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.321835 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.322127 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" containerID="cri-o://d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857" gracePeriod=30 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.408368 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.408617 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager" 
containerID="cri-o://f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180" gracePeriod=30 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.467289 4867 patch_prober.go:28] interesting pod/controller-manager-748d4597b7-zr2sc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.467341 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.589897 4867 generic.go:334] "Generic (PLEG): container finished" podID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerID="f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180" exitCode=0 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.589971 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerDied","Data":"f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180"} Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.595023 4867 generic.go:334] "Generic (PLEG): container finished" podID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerID="d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857" exitCode=0 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.595073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" 
event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerDied","Data":"d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857"} Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.703681 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.749345 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882306 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882327 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 
crc kubenswrapper[4867]: I0214 04:15:08.882377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882447 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883354 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config" (OuterVolumeSpecName: "config") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883606 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883626 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883638 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config" (OuterVolumeSpecName: "config") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887595 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9" (OuterVolumeSpecName: "kube-api-access-7sbq9") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "kube-api-access-7sbq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887962 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9" (OuterVolumeSpecName: "kube-api-access-g9fs9") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "kube-api-access-g9fs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.888353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984662 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984732 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984748 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984758 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984766 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984776 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601002 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerDied","Data":"541ea6e9e6c3a77aac7816654698f9c602bfc9a3197a2fd757215b2f093807ec"}
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601068 4867 scope.go:117] "RemoveContainer" containerID="f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601349 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.602236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerDied","Data":"e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7"}
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.602306 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.616610 4867 scope.go:117] "RemoveContainer" containerID="d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.620157 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.623820 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.628776 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.631331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.931231 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945242 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945279 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945307 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945314 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945322 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945328 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945489 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945517 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945526 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.946151 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.946344 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.947554 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.948343 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953204 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953611 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953693 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953928 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954093 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954576 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954704 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.955028 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.955065 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.959191 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.962670 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099216 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201303 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201360 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.202735 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.202801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.203028 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.204596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.205684 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.206451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.212226 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.223829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.227329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.266118 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.274501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.570314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.609796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerStarted","Data":"a9053638ca02cf4b81c623ee2fa7a93b209439119614460e70b88e72705e85c2"}
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.715500 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:10 crc kubenswrapper[4867]: W0214 04:15:10.719299 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e49f0b_ad6d_49a7_a2a3_10cba6dd6ac2.slice/crio-360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94 WatchSource:0}: Error finding container 360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94: Status 404 returned error can't find the container with id 360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.004653 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" path="/var/lib/kubelet/pods/b9320aa8-606f-42da-94c7-886ddd1a0646/volumes"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.005607 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" path="/var/lib/kubelet/pods/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6/volumes"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.617228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerStarted","Data":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.617499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.618818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerStarted","Data":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.618857 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerStarted","Data":"360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.619484 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.623122 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.623380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.633434 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" podStartSLOduration=3.633420018 podStartE2EDuration="3.633420018s" podCreationTimestamp="2026-02-14 04:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:11.632119585 +0000 UTC m=+343.713056899" watchObservedRunningTime="2026-02-14 04:15:11.633420018 +0000 UTC m=+343.714357322"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.659127 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" podStartSLOduration=3.659101984 podStartE2EDuration="3.659101984s" podCreationTimestamp="2026-02-14 04:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:11.655912492 +0000 UTC m=+343.736849806" watchObservedRunningTime="2026-02-14 04:15:11.659101984 +0000 UTC m=+343.740039298"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.973479 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"]
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.974920 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.977308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979273 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979600 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979629 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979794 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.985297 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"]
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.146970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.147018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.147053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.248063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.248110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.248143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.249076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.254053 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.269909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.313026 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"
Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.757101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"]
Feb 14 04:15:13 crc kubenswrapper[4867]: W0214 04:15:13.761263 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a7e088b_b9a0_4187_9acc_601d315d8d0f.slice/crio-647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482 WatchSource:0}: Error finding container 647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482: Status 404 returned error can't find the container with id 647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482
Feb 14 04:15:14 crc kubenswrapper[4867]: I0214 04:15:14.636462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" event={"ID":"4a7e088b-b9a0-4187-9acc-601d315d8d0f","Type":"ContainerStarted","Data":"647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482"}
Feb 14 04:15:14 crc kubenswrapper[4867]: I0214 04:15:14.727891 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.254825 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"]
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.255493 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.257086 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.258634 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-7rsz8"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.264754 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"]
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.387362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.489047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.489210 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.489273 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:16.98925227 +0000 UTC m=+349.070189584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.647980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" event={"ID":"4a7e088b-b9a0-4187-9acc-601d315d8d0f","Type":"ContainerStarted","Data":"9d746188b23a7e6be17773e838adf45a22e82f6ed5dd0ae26f926e3c20c72059"}
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.662725 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" podStartSLOduration=2.857978295 podStartE2EDuration="4.662708169s" podCreationTimestamp="2026-02-14 04:15:12 +0000 UTC" firstStartedPulling="2026-02-14 04:15:13.763227133 +0000 UTC m=+345.844164447" lastFinishedPulling="2026-02-14 04:15:15.567956997 +0000 UTC m=+347.648894321" observedRunningTime="2026-02-14 04:15:16.661236371 +0000 UTC m=+348.742173695" watchObservedRunningTime="2026-02-14 04:15:16.662708169 +0000 UTC m=+348.743645483"
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.995806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") "
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.995990 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.996086 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:17.996064504 +0000 UTC m=+350.077001818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.014729 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"] Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.014919 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerName="controller-manager" containerID="cri-o://79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" gracePeriod=30 Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.030559 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"] Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.030802 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerName="route-controller-manager" containerID="cri-o://cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" gracePeriod=30 Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.511073 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.596866 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604222 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604327 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.605451 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca" (OuterVolumeSpecName: "client-ca") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.605935 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config" (OuterVolumeSpecName: "config") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.612229 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf" (OuterVolumeSpecName: "kube-api-access-jlwrf") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "kube-api-access-jlwrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.613573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.655878 4867 generic.go:334] "Generic (PLEG): container finished" podID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" exitCode=0 Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.655990 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerDied","Data":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"} Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerDied","Data":"a9053638ca02cf4b81c623ee2fa7a93b209439119614460e70b88e72705e85c2"} Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656132 4867 scope.go:117] "RemoveContainer" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658174 4867 generic.go:334] "Generic (PLEG): container finished" podID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" exitCode=0 Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" 
event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerDied","Data":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"} Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerDied","Data":"360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94"} Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658319 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.676891 4867 scope.go:117] "RemoveContainer" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" Feb 14 04:15:17 crc kubenswrapper[4867]: E0214 04:15:17.677229 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": container with ID starting with cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2 not found: ID does not exist" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.677261 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"} err="failed to get container status \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": rpc error: code = NotFound desc = could not find container \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": container with ID starting with cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2 not found: ID does not exist" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 
04:15:17.677281 4867 scope.go:117] "RemoveContainer" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.688957 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"] Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691323 4867 scope.go:117] "RemoveContainer" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" Feb 14 04:15:17 crc kubenswrapper[4867]: E0214 04:15:17.691732 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": container with ID starting with 79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69 not found: ID does not exist" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691760 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"} err="failed to get container status \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": rpc error: code = NotFound desc = could not find container \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": container with ID starting with 79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69 not found: ID does not exist" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691972 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"] Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705259 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705419 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705645 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705663 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705672 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705681 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706656 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config" (OuterVolumeSpecName: "config") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706631 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.708520 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv" (OuterVolumeSpecName: "kube-api-access-6dvsv") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "kube-api-access-6dvsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.709759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807547 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807610 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807642 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807668 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") on node \"crc\" 
DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807691 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.989081 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"] Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.995985 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.010042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.010273 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.010348 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:20.010330607 +0000 UTC m=+352.091267931 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.938769 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.939058 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerName="route-controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.939075 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerName="route-controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.939091 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerName="controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.939100 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerName="controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.939213 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerName="route-controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.939234 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerName="controller-manager" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.939726 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.941455 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.942181 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946360 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946791 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946926 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946792 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.947343 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.947396 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949098 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949324 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949567 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949778 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.951326 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.951464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.962823 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.965767 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.966315 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.012879 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" path="/var/lib/kubelet/pods/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2/volumes" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.015410 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" path="/var/lib/kubelet/pods/e50093fe-87a2-46d7-aab7-3bf4179dc49b/volumes" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123373 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.124613 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" 
(UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.124894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125072 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125258 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125405 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226557 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " 
pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226935 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.227024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.227080 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.228925 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.229000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.229905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.230861 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.234466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.240739 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.240800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.248569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.257814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.284388 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.302147 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.494329 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:19 crc kubenswrapper[4867]: W0214 04:15:19.504604 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd15dd24_0b64_4213_842f_5727fdedffaf.slice/crio-f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1 WatchSource:0}: Error finding container f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1: Status 404 returned error can't find the container with id f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1 Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.532825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:19 crc kubenswrapper[4867]: W0214 04:15:19.540082 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod460ab01d_a050_4210_8f77_1564c687b8aa.slice/crio-ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe WatchSource:0}: Error finding container ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe: Status 404 returned error can't find the container with id ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.671430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerStarted","Data":"ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe"} Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.672522 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerStarted","Data":"f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1"} Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.038120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:20 crc kubenswrapper[4867]: E0214 04:15:20.038314 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:20 crc kubenswrapper[4867]: E0214 04:15:20.038603 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:24.038581209 +0000 UTC m=+356.119518523 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.681870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerStarted","Data":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"} Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.682132 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.683878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerStarted","Data":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"} Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.684847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.689875 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.694258 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.711013 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" podStartSLOduration=3.7109971379999998 podStartE2EDuration="3.710997138s" podCreationTimestamp="2026-02-14 04:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:20.710122685 +0000 UTC m=+352.791060019" watchObservedRunningTime="2026-02-14 04:15:20.710997138 +0000 UTC m=+352.791934452" Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.764240 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" podStartSLOduration=3.764212958 podStartE2EDuration="3.764212958s" podCreationTimestamp="2026-02-14 04:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:20.760743538 +0000 UTC m=+352.841680942" watchObservedRunningTime="2026-02-14 04:15:20.764212958 +0000 UTC m=+352.845150292" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.257695 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.648331 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"] Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.649100 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.657922 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"] Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.777951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrfh\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-kube-api-access-fwrfh\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-certificates\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-trusted-ca\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-bound-sa-token\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.800029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-certificates\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-trusted-ca\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-bound-sa-token\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrfh\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-kube-api-access-fwrfh\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880258 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-certificates\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881501 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-trusted-ca\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.888863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.889134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.897495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-bound-sa-token\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.898326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrfh\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-kube-api-access-fwrfh\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: 
\"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.964791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.349763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"] Feb 14 04:15:23 crc kubenswrapper[4867]: W0214 04:15:23.357756 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbf9502a_06eb_4e94_911a_3a7ac1426dd8.slice/crio-1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67 WatchSource:0}: Error finding container 1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67: Status 404 returned error can't find the container with id 1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67 Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699105 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" event={"ID":"bbf9502a-06eb-4e94-911a-3a7ac1426dd8","Type":"ContainerStarted","Data":"972779f98658ff6da5bb0e972489175cb939a80dd58a4d23e04dc2b8617b4c65"} Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699461 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" event={"ID":"bbf9502a-06eb-4e94-911a-3a7ac1426dd8","Type":"ContainerStarted","Data":"1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67"} Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.719425 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podStartSLOduration=1.7194003279999999 podStartE2EDuration="1.719400328s" podCreationTimestamp="2026-02-14 04:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:23.715351593 +0000 UTC m=+355.796288947" watchObservedRunningTime="2026-02-14 04:15:23.719400328 +0000 UTC m=+355.800337662" Feb 14 04:15:24 crc kubenswrapper[4867]: I0214 04:15:24.097225 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:24 crc kubenswrapper[4867]: E0214 04:15:24.097395 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:24 crc kubenswrapper[4867]: E0214 04:15:24.097455 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:32.097436542 +0000 UTC m=+364.178373856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:31 crc kubenswrapper[4867]: I0214 04:15:31.250938 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:15:31 crc kubenswrapper[4867]: I0214 04:15:31.251665 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:15:32 crc kubenswrapper[4867]: I0214 04:15:32.102469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:32 crc kubenswrapper[4867]: E0214 04:15:32.102744 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:32 crc kubenswrapper[4867]: E0214 04:15:32.102857 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd 
nodeName:}" failed. No retries permitted until 2026-02-14 04:15:48.102820603 +0000 UTC m=+380.183757957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.133383 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.134336 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.136266 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.142975 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216880 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " 
pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318431 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " 
pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.330318 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.331303 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.335412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.339542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.341472 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25nk5\" (UniqueName: \"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 
04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420414 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.450859 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25nk5\" (UniqueName: 
\"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.522358 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.522441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.542182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25nk5\" (UniqueName: \"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.664778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.840614 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: W0214 04:15:33.845990 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0fe6db4_add0_4993_a40c_c5b6725565fa.slice/crio-4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873 WatchSource:0}: Error finding container 4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873: Status 404 returned error can't find the container with id 4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.041667 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:34 crc kubenswrapper[4867]: W0214 04:15:34.047822 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe125812_eeef_4043_bef9_fea01037dddb.slice/crio-e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b WatchSource:0}: Error finding container e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b: Status 404 returned error can't find the container with id e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.757995 4867 generic.go:334] "Generic (PLEG): container finished" podID="be125812-eeef-4043-bef9-fea01037dddb" containerID="450f441c3fc59e9212fe447420930708c0698125bce6cd66d1552fe6d6695ba6" exitCode=0 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.758048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" 
event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerDied","Data":"450f441c3fc59e9212fe447420930708c0698125bce6cd66d1552fe6d6695ba6"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.758570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerStarted","Data":"e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760391 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="f7057ae1c4e2413e60ccd9e1345e2b034e0ca95a6196aeca71b8376a3e569f50" exitCode=0 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"f7057ae1c4e2413e60ccd9e1345e2b034e0ca95a6196aeca71b8376a3e569f50"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.542035 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.543369 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.545946 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.556327 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653920 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.730656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.731586 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.737041 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.745717 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.756019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 
14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.756019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.765964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.767595 4867 generic.go:334] "Generic (PLEG): container finished" podID="be125812-eeef-4043-bef9-fea01037dddb" containerID="ccfec040a9892d4263ee046f6330e18b2c143d33869eb2511259bf87aeda48cb" exitCode=0 Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.767631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerDied","Data":"ccfec040a9892d4263ee046f6330e18b2c143d33869eb2511259bf87aeda48cb"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.780500 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: 
\"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856987 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.860703 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.957809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.959141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " 
pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.975583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.046984 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.237444 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:36 crc kubenswrapper[4867]: W0214 04:15:36.246851 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8fe62eb_932d_4b17_8ffa_6c90780bdd74.slice/crio-909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a WatchSource:0}: Error finding container 909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a: Status 404 returned error can't find the container with id 909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.407654 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.775714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerStarted","Data":"69ab4f23480ad187e639a58fd17104be8c48c506d1ee3c45267693b5ee9cc4a9"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778173 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerID="bc39cfe6c4e3f56df9c5948f4fa345c452ace9e770ea0fba08c4fc6389bc05b2" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778215 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerDied","Data":"bc39cfe6c4e3f56df9c5948f4fa345c452ace9e770ea0fba08c4fc6389bc05b2"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779881 4867 generic.go:334] "Generic (PLEG): container finished" podID="140d0152-99c5-425c-b956-595dea337206" containerID="ccb9fa229a0d0673ab8663782ef04bf45bc05fe821571028056bb1469529e936" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerDied","Data":"ccb9fa229a0d0673ab8663782ef04bf45bc05fe821571028056bb1469529e936"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"1f273fc0233825535c5879cadfe14d979a0b834e11cf19e55eeaf980caad47ed"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.782790 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.782832 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.795417 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w69fq" podStartSLOduration=2.3673676869999998 podStartE2EDuration="3.795400911s" podCreationTimestamp="2026-02-14 04:15:33 +0000 UTC" firstStartedPulling="2026-02-14 04:15:34.759954224 +0000 UTC m=+366.840891538" lastFinishedPulling="2026-02-14 04:15:36.187987448 +0000 UTC m=+368.268924762" observedRunningTime="2026-02-14 04:15:36.794176069 +0000 UTC m=+368.875113393" watchObservedRunningTime="2026-02-14 04:15:36.795400911 +0000 UTC m=+368.876338225" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.988172 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.988408 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" containerID="cri-o://0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" gracePeriod=30 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.020344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.020618 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" 
containerID="cri-o://dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" gracePeriod=30 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.576170 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.666494 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681158 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681382 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 
04:15:37.681981 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.682017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config" (OuterVolumeSpecName: "config") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.690697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.698752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7" (OuterVolumeSpecName: "kube-api-access-qhmp7") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "kube-api-access-qhmp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782395 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782463 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782520 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782541 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782799 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmp7\" (UniqueName: 
\"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782811 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782824 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782834 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783183 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config" (OuterVolumeSpecName: "config") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783317 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.788655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.788852 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh" (OuterVolumeSpecName: "kube-api-access-lv2fh") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "kube-api-access-lv2fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.793500 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.795401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796900 4867 generic.go:334] "Generic (PLEG): container finished" podID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" exitCode=0 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerDied","Data":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796961 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerDied","Data":"f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.797006 4867 scope.go:117] "RemoveContainer" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798275 4867 generic.go:334] "Generic (PLEG): container finished" podID="460ab01d-a050-4210-8f77-1564c687b8aa" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" exitCode=0 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798323 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerDied","Data":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.799797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerDied","Data":"ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.806842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815361 4867 scope.go:117] "RemoveContainer" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: E0214 04:15:37.815908 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": container with ID starting with dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d not found: ID does not exist" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815935 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"} err="failed to get container 
status \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": rpc error: code = NotFound desc = could not find container \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": container with ID starting with dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d not found: ID does not exist" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815957 4867 scope.go:117] "RemoveContainer" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.819005 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrccv" podStartSLOduration=2.413322938 podStartE2EDuration="4.818983007s" podCreationTimestamp="2026-02-14 04:15:33 +0000 UTC" firstStartedPulling="2026-02-14 04:15:34.761923985 +0000 UTC m=+366.842861299" lastFinishedPulling="2026-02-14 04:15:37.167584054 +0000 UTC m=+369.248521368" observedRunningTime="2026-02-14 04:15:37.814425499 +0000 UTC m=+369.895362833" watchObservedRunningTime="2026-02-14 04:15:37.818983007 +0000 UTC m=+369.899920321" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.831362 4867 scope.go:117] "RemoveContainer" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc kubenswrapper[4867]: E0214 04:15:37.831946 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": container with ID starting with 0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee not found: ID does not exist" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.831990 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"} err="failed to get container status \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": rpc error: code = NotFound desc = could not find container \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": container with ID starting with 0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee not found: ID does not exist" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.885234 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887909 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887962 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887978 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887995 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.888007 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc 
kubenswrapper[4867]: I0214 04:15:37.896499 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.906344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.911043 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.812226 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerID="e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21" exitCode=0 Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.812276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerDied","Data":"e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21"} Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.817031 4867 generic.go:334] "Generic (PLEG): container finished" podID="140d0152-99c5-425c-b956-595dea337206" containerID="fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423" exitCode=0 Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.818004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerDied","Data":"fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423"} Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.949810 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:38 crc kubenswrapper[4867]: E0214 04:15:38.950069 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950084 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: E0214 04:15:38.950095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950102 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950235 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950257 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950715 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.955071 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.956054 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.958535 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.962646 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963229 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963348 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963348 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963380 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963496 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963374 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963558 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 
04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963622 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963671 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.964241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.965796 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.969308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.017318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" path="/var/lib/kubelet/pods/460ab01d-a050-4210-8f77-1564c687b8aa/volumes" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.018014 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" path="/var/lib/kubelet/pods/cd15dd24-0b64-4213-842f-5727fdedffaf/volumes" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105316 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105416 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105443 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105531 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: 
\"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105898 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105988 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206735 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206780 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " 
pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.207710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.207982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod 
\"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.208236 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.209018 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.209291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.215298 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.226251 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.232374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.243087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.269607 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.282915 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.461132 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.846029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"b65587de43aa6ea02405a8183ab53782da2064888cd423d8e57c6b42b146f30e"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.850759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"d8dc2df6324b08cc38cc32dd78258391d8945bcaac442105f07f930438bed3e2"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerStarted","Data":"a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerStarted","Data":"2fd954bb3352333915558ae50d85ba987c86894ea554a0ffcd81defc26a5063c"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852388 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.873742 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-gbz8c" podStartSLOduration=2.388542174 podStartE2EDuration="4.873721384s" podCreationTimestamp="2026-02-14 04:15:35 +0000 UTC" firstStartedPulling="2026-02-14 04:15:36.779425407 +0000 UTC m=+368.860362731" lastFinishedPulling="2026-02-14 04:15:39.264604627 +0000 UTC m=+371.345541941" observedRunningTime="2026-02-14 04:15:39.869811073 +0000 UTC m=+371.950748377" watchObservedRunningTime="2026-02-14 04:15:39.873721384 +0000 UTC m=+371.954658708" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.898476 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bvb8v" podStartSLOduration=2.133193932 podStartE2EDuration="4.898457666s" podCreationTimestamp="2026-02-14 04:15:35 +0000 UTC" firstStartedPulling="2026-02-14 04:15:36.781039109 +0000 UTC m=+368.861976423" lastFinishedPulling="2026-02-14 04:15:39.546302843 +0000 UTC m=+371.627240157" observedRunningTime="2026-02-14 04:15:39.894876093 +0000 UTC m=+371.975813427" watchObservedRunningTime="2026-02-14 04:15:39.898457666 +0000 UTC m=+371.979394970" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.001481 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.018815 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" podStartSLOduration=3.018760056 podStartE2EDuration="3.018760056s" podCreationTimestamp="2026-02-14 04:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:39.917993303 +0000 UTC m=+371.998930617" watchObservedRunningTime="2026-02-14 04:15:40.018760056 +0000 UTC m=+372.099697370" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 
04:15:40.565026 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:40 crc kubenswrapper[4867]: W0214 04:15:40.569646 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b0aebec_9a44_4db3_9bcb_e63c5f1748c8.slice/crio-9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca WatchSource:0}: Error finding container 9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca: Status 404 returned error can't find the container with id 9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.858406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerStarted","Data":"f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.858726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerStarted","Data":"9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.886457 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" podStartSLOduration=3.886440929 podStartE2EDuration="3.886440929s" podCreationTimestamp="2026-02-14 04:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:40.882723322 +0000 UTC m=+372.963660646" watchObservedRunningTime="2026-02-14 04:15:40.886440929 +0000 UTC m=+372.967378243" Feb 14 04:15:41 crc kubenswrapper[4867]: 
I0214 04:15:41.863033 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:41 crc kubenswrapper[4867]: I0214 04:15:41.867143 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:42 crc kubenswrapper[4867]: I0214 04:15:42.969817 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.013210 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.451612 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.451969 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.498569 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.665359 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.665703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.701863 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.907222 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.911181 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.861619 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.861696 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.899365 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.945182 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.048328 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.048393 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.095000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.935389 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.298484 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" 
podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" containerID="cri-o://271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" gracePeriod=15 Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.652961 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.653019 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.121045 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.127299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.389490 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-7rsz8" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.397993 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.784756 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"] Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.901957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"55d18515117e753920e9e272d9a88de7c22f36f1d0b769a87520cf9673e87279"} Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.904015 4867 generic.go:334] "Generic (PLEG): container finished" podID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerID="271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" exitCode=0 Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.904044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerDied","Data":"271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea"} Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.401031 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444011 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:50 crc kubenswrapper[4867]: E0214 04:15:50.444343 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444374 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444561 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.445202 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.452083 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552873 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod 
\"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552973 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553042 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553502 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553531 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553603 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf64k\" (UniqueName: 
\"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553678 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553787 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod 
\"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554747 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554795 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554846 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: 
\"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555266 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555278 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555289 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555299 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555311 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558218 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.559411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.559918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560157 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.562954 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k" (OuterVolumeSpecName: "kube-api-access-tf64k") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "kube-api-access-tf64k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656808 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 
14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656994 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657084 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657097 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657108 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657117 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657127 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657136 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657146 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657156 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657167 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.658351 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.658607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.660962 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.661417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.661498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.662095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " 
pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663729 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.664427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.672937 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.758072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerDied","Data":"0005bb5ab795f3cb3316208372a9d4195e426c2a1f38a510bf0162032f954a9f"} Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920548 4867 scope.go:117] "RemoveContainer" containerID="271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920284 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.973901 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.983441 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.004343 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" path="/var/lib/kubelet/pods/0ad7b333-6328-41ea-a81d-bce9790b185a/volumes" Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.223520 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926552 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"ec56011e077735a51f9641794580e4ff556553e447a8038ff938b25782de9471"} Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926579 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.950580 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podStartSLOduration=29.950562596 
podStartE2EDuration="29.950562596s" podCreationTimestamp="2026-02-14 04:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:51.949141099 +0000 UTC m=+384.030078413" watchObservedRunningTime="2026-02-14 04:15:51.950562596 +0000 UTC m=+384.031499920" Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.159157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.937617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"} Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.951649 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podStartSLOduration=33.275061131 podStartE2EDuration="36.951630349s" podCreationTimestamp="2026-02-14 04:15:16 +0000 UTC" firstStartedPulling="2026-02-14 04:15:48.790026341 +0000 UTC m=+380.870963655" lastFinishedPulling="2026-02-14 04:15:52.466595559 +0000 UTC m=+384.547532873" observedRunningTime="2026-02-14 04:15:52.950292914 +0000 UTC m=+385.031230228" watchObservedRunningTime="2026-02-14 04:15:52.951630349 +0000 UTC m=+385.032567663" Feb 14 04:15:53 crc kubenswrapper[4867]: I0214 04:15:53.941850 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:53 crc kubenswrapper[4867]: I0214 04:15:53.945701 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" 
Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.308340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.309411 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.311451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.311616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-wzkj2" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.313009 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.313738 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.330693 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.347988 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348560 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449633 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: E0214 04:15:54.450492 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 14 04:15:54 crc kubenswrapper[4867]: E0214 04:15:54.450588 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls podName:cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79 nodeName:}" failed. No retries permitted until 2026-02-14 04:15:54.950567162 +0000 UTC m=+387.031504476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls") pod "prometheus-operator-db54df47d-g2d66" (UID: "cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79") : secret "prometheus-operator-tls" not found Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.453735 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.473793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.482143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.957792 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.962296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.246065 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.714148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.954297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"7db2257006a1dce4c327dec7939024ac5808b3eee8119129b1c1a67673793112"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.077924 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.078557 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" containerID="cri-o://f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" gracePeriod=30 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.179747 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 
04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.179946 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" containerID="cri-o://a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" gracePeriod=30 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.974695 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerID="f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" exitCode=0 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.974768 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerDied","Data":"f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.981345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"de1c27492cf2ee3b7e71306ec0493f4eb050389488e398e112decb528537a85d"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.983384 4867 generic.go:334] "Generic (PLEG): container finished" podID="e06bd216-b4b8-4754-a364-76f41991e155" containerID="a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" exitCode=0 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.983479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerDied","Data":"a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de"} Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.084964 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099515 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099638 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.100477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config" (OuterVolumeSpecName: "config") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.100795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca" (OuterVolumeSpecName: "client-ca") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.114481 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6" (OuterVolumeSpecName: "kube-api-access-gp5z6") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "kube-api-access-gp5z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.114550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200422 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200463 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200476 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200486 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.242334 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402434 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402574 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.403427 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.403468 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config" (OuterVolumeSpecName: "config") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.403558 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.405773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6" (OuterVolumeSpecName: "kube-api-access-x4ck6") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "kube-api-access-x4ck6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.405898 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503477 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503524 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503536 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503544 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503554 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968665 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:58 crc kubenswrapper[4867]: E0214 04:15:58.968920 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968940 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 
04:15:58 crc kubenswrapper[4867]: E0214 04:15:58.968961 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968970 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969099 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969119 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerDied","Data":"2fd954bb3352333915558ae50d85ba987c86894ea554a0ffcd81defc26a5063c"} Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991383 4867 scope.go:117] "RemoveContainer" containerID="a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991618 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.993803 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.005835 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.018430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerDied","Data":"9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca"} Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.019036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"c9e0ebbec040bfcfa0018745f36e8ec5e28793607a413016790f0c0f786c4220"} Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.023530 4867 scope.go:117] "RemoveContainer" containerID="f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.046054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.049499 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.060008 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:59 crc kubenswrapper[4867]: 
I0214 04:15:59.060104 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.067754 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" podStartSLOduration=3.136401999 podStartE2EDuration="5.067732997s" podCreationTimestamp="2026-02-14 04:15:54 +0000 UTC" firstStartedPulling="2026-02-14 04:15:55.725526838 +0000 UTC m=+387.806464152" lastFinishedPulling="2026-02-14 04:15:57.656857826 +0000 UTC m=+389.737795150" observedRunningTime="2026-02-14 04:15:59.064384401 +0000 UTC m=+391.145321715" watchObservedRunningTime="2026-02-14 04:15:59.067732997 +0000 UTC m=+391.148670311" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.109942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110146 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod 
\"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: 
I0214 04:15:59.211761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.212935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.213375 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.216677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.230054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " 
pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.289463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.673561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:59 crc kubenswrapper[4867]: W0214 04:15:59.678724 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b49908_c23d_45d6_b7fa_3d718d01ee00.slice/crio-36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c WatchSource:0}: Error finding container 36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c: Status 404 returned error can't find the container with id 36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerStarted","Data":"6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6"} Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerStarted","Data":"36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c"} Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:00 crc 
kubenswrapper[4867]: I0214 04:16:00.046124 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" podStartSLOduration=3.046106935 podStartE2EDuration="3.046106935s" podCreationTimestamp="2026-02-14 04:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:00.044889493 +0000 UTC m=+392.125826797" watchObservedRunningTime="2026-02-14 04:16:00.046106935 +0000 UTC m=+392.127044249" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.455040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.686942 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.688030 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: W0214 04:16:00.698808 4867 reflector.go:561] object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config": failed to list *v1.Secret: secrets "openshift-state-metrics-kube-rbac-proxy-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Feb 14 04:16:00 crc kubenswrapper[4867]: E0214 04:16:00.698861 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-state-metrics-kube-rbac-proxy-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.703277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.712476 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.726047 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.727453 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730189 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.749580 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.752909 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-r85dv"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.754375 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.756271 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.758174 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844381 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844453 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844517 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844867 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56jc\" (UniqueName: \"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrmq\" (UniqueName: 
\"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.946936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: 
\"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947307 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: 
I0214 04:16:00.949067 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949283 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56jc\" (UniqueName: \"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zrmq\" (UniqueName: \"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: 
\"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950813 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.951733 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.951814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.953883 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.953909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.954708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 
04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.969071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zrmq\" (UniqueName: \"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.970002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56jc\" (UniqueName: \"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.976885 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.985750 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.987556 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988176 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988344 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988468 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.990301 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.991215 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.991894 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.003361 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.016525 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" path="/var/lib/kubelet/pods/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8/volumes" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.017437 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e06bd216-b4b8-4754-a364-76f41991e155" path="/var/lib/kubelet/pods/e06bd216-b4b8-4754-a364-76f41991e155/volumes" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.046037 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc 
kubenswrapper[4867]: I0214 04:16:01.051820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051837 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051853 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 
04:16:01.051929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 
04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052999 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.054657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.054749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.058641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.062763 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.085851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153874 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.155439 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.155992 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.157051 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " 
pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.159243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.176604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.251320 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.251371 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.337763 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.373762 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.391299 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d066eda_8f33_492d_bf5c_fb6eefed1ced.slice/crio-10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a WatchSource:0}: Error finding container 10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a: Status 404 returned error can't find the container with id 10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.515156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.522694 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb7e15d_7a93_4f87_a926_78eb1ead3680.slice/crio-fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e WatchSource:0}: Error finding container fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e: Status 404 returned error can't find the container with id fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.632528 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.643782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.845748 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.852337 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8708b876_3ece_4820_b4f1_35d9fb2a195c.slice/crio-d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c WatchSource:0}: Error finding container d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c: Status 404 returned error can't find the container with id d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.866780 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.868591 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872961 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.873151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.873728 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.880580 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.882445 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.884775 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.885780 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.906318 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967413 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967521 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967626 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967803 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.066457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a"}
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070814 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071104 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.078767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.079177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.079639 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerStarted","Data":"d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c"}
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.080149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.080889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.082640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.083973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.089678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.092969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e"}
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093356 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.096750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.196159 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.386371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"]
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.723388 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.784483 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"]
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.793751 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-7kwwq"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797668 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797788 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797893 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.798033 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.798149 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.803420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-17jpo9sluqn12"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.832851 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"]
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892349 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.998838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999259 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.000708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.007661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.008129 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.010464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.015873 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.034720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.040620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.048904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"41226741f1ca63a0314854105ee1ce32c395e601a7879b00a61e3531e13e0e9a"}
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"b76398484dfa69747d3ec86f6c5324e37226daf8848e6352b3135a8d16581f21"}
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"3af0078730ce8ecd268ea6d91af18ec80365f8d9649e0bb2ac70611110bdd78b"}
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.100763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerStarted","Data":"abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7"}
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.101075 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.103577 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"9f022d0231f135a752d98219eee7840a83e14d9d801b81aee3ea93de570a6a0c"}
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.109350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.135236 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7"
Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.142214 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" podStartSLOduration=6.142199751 podStartE2EDuration="6.142199751s" podCreationTimestamp="2026-02-14 04:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:03.118564399 +0000 UTC m=+395.199501733" watchObservedRunningTime="2026-02-14 04:16:03.142199751 +0000 UTC m=+395.223137055"
Feb 14 04:16:04 crc kubenswrapper[4867]: I0214 04:16:04.531209 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"]
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.117990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"3cae1f3da5324ad6a7765b39315d91d008076db773acf89cdfe16d10df3238f2"}
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.124391 4867 generic.go:334] "Generic (PLEG): container finished" podID="7d066eda-8f33-492d-bf5c-fb6eefed1ced" containerID="39494fc9f698469501e541fe48f10554b81437f5f3f35bd14d402b6e2cf1c3ca" exitCode=0
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.125953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerDied","Data":"39494fc9f698469501e541fe48f10554b81437f5f3f35bd14d402b6e2cf1c3ca"}
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.542637 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"]
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.544837 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.604821 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"]
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666206 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666345 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666435 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769652 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.770950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.771227 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.771836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.772781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.779529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.779602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.795084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.956161 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z"
Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.114722 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"]
Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.120793 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124244 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124765 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mc7cq" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124955 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125050 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-abg8865f8j0ji" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125552 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.133169 4867 generic.go:334] "Generic (PLEG): container finished" podID="c5a5db44-6c30-46cf-a796-64a6e898d1d8" containerID="86fc1a6798da12a1789d84257cbccec7dccff2f126dc7b986ccb003e93a9c590" exitCode=0 Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.133231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerDied","Data":"86fc1a6798da12a1789d84257cbccec7dccff2f126dc7b986ccb003e93a9c590"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.138834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.148872 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"9591beb52dab2e4705bcde5b084f7050eaee63d3797fbf0fc8bfa9dbb6b8cd39"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.149281 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"d08ca02c1ff320120218e63dd9fb8d0b5e23c858da1415d59e2a0dedb0001612"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.149294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"d34d202e077f56baa0981c4e3634f34875c0d7fbde4e24fa95e94a15f7803c4f"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.151523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"46fabc4fd91cd9c51b46059c87c082a0879831203a97e844e9a887eaceb509d3"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.156489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"d37c84664af69c7327ec7303516ef0bbe2962e265fbee20c35b4e4962f3bdb92"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.156548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"ac1aaafe0177a2d6a82c473c3bb33148114ea78577bbfe08cb129d9db744fb63"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.197147 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" podStartSLOduration=3.97261469 podStartE2EDuration="6.197114771s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:02.954352931 +0000 UTC m=+395.035290245" lastFinishedPulling="2026-02-14 04:16:05.178853012 +0000 UTC m=+397.259790326" observedRunningTime="2026-02-14 04:16:06.187489752 +0000 UTC m=+398.268427066" watchObservedRunningTime="2026-02-14 04:16:06.197114771 +0000 UTC m=+398.278052085" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.236485 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-r85dv" podStartSLOduration=3.509462531 podStartE2EDuration="6.236471899s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:01.393798745 +0000 UTC m=+393.474736059" lastFinishedPulling="2026-02-14 04:16:04.120808093 +0000 UTC m=+396.201745427" observedRunningTime="2026-02-14 04:16:06.235329859 +0000 UTC m=+398.316267173" watchObservedRunningTime="2026-02-14 04:16:06.236471899 +0000 UTC m=+398.317409213" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.239365 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" podStartSLOduration=3.623661554 podStartE2EDuration="6.239353763s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:01.525534892 +0000 UTC m=+393.606472206" lastFinishedPulling="2026-02-14 04:16:04.141227101 +0000 UTC m=+396.222164415" observedRunningTime="2026-02-14 04:16:06.212874919 +0000 UTC m=+398.293812233" watchObservedRunningTime="2026-02-14 04:16:06.239353763 +0000 UTC m=+398.320291067" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.275475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276729 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: 
\"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379628 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379814 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.380048 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.380677 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.381788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.381907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.387074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.387294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 
14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.389573 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.397532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.414837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:16:06 crc kubenswrapper[4867]: W0214 04:16:06.421201 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77d496b_c6fc_478c_9bf7_7ea59cb3a474.slice/crio-27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7 WatchSource:0}: Error finding container 27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7: Status 404 returned error can't find the container with id 27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7 Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.448098 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.491160 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.492907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.496239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.499302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.499367 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.592903 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.694455 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.701123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.829526 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.895181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"] Feb 14 04:16:06 crc kubenswrapper[4867]: W0214 04:16:06.955441 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652d53d9_a4c0_4061_b817_ca5173785521.slice/crio-8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292 WatchSource:0}: Error finding container 8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292: Status 404 returned error can't find the container with id 8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292 Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.117327 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.119359 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129833 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129874 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129993 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130091 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131811 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131949 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-dmlbq" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132165 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132286 4867 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8oiq8eud6lg7c" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132446 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.140652 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.141039 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206473 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerStarted","Data":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerStarted","Data":"27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206925 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206961 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207167 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207246 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 
04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207263 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.208877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.223857 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fbfc7fbd4-76v9z" podStartSLOduration=2.223836339 podStartE2EDuration="2.223836339s" podCreationTimestamp="2026-02-14 04:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:07.221870788 +0000 UTC m=+399.302808102" watchObservedRunningTime="2026-02-14 04:16:07.223836339 +0000 UTC m=+399.304773643" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.263986 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:07 crc kubenswrapper[4867]: W0214 04:16:07.272090 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf2722f_8c1f_4061_8c4a_9888961c5361.slice/crio-5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674 WatchSource:0}: Error finding container 5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674: Status 404 returned error can't find the container with id 5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674 Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308350 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc 
kubenswrapper[4867]: I0214 04:16:07.308391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309325 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 
04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309409 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309458 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.311093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 
04:16:07.311201 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.311987 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.313352 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317342 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317651 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.318154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.320332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.320952 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.321725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.322453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") 
pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.325843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.328048 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.332263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.332781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.336217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" 
(UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.340389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.462543 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.057853 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" containerID="cri-o://984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" gracePeriod=30 Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.216539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" event={"ID":"bcf2722f-8c1f-4061-8c4a-9888961c5361","Type":"ContainerStarted","Data":"5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674"} Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.219196 4867 generic.go:334] "Generic (PLEG): container finished" podID="c029599e-5014-4874-917f-076635849451" containerID="984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" exitCode=0 Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.219250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerDied","Data":"984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517"} Feb 14 04:16:09 crc 
kubenswrapper[4867]: I0214 04:16:09.594679 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762622 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc 
kubenswrapper[4867]: I0214 04:16:09.762721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.763979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.764093 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771587 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771806 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6" (OuterVolumeSpecName: "kube-api-access-bmbh6") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "kube-api-access-bmbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.783195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.783360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.784687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864544 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864576 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864587 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864596 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864605 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864612 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864620 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.954914 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerDied","Data":"6ea0765f93238181496aa9ad98328dd359db53721f5f5fd14d5d2d61c6d3b39b"} Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248119 4867 scope.go:117] "RemoveContainer" containerID="984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248285 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.293304 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.297301 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.005014 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c029599e-5014-4874-917f-076635849451" path="/var/lib/kubelet/pods/c029599e-5014-4874-917f-076635849451/volumes" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"2feaa7ec3b997344380510cbb416c62fadf0bc72aa0c4b6730f60e6d52015870"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"d2159771377fc702371462aa9a14ef614a4f97f6537f88e8acad4e91910fe740"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"a4944956fbbc325cfc0cd1268c251f77eced79caeff535d2b1c8b141aeb39bc0"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.260270 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" 
event={"ID":"bcf2722f-8c1f-4061-8c4a-9888961c5361","Type":"ContainerStarted","Data":"9111a116940ebcb2258feb531f677548eeb63b1e51787d91375ec3b3726af5fa"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.260711 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.265995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"0571d9124b51b4ce87998f4b34d8cd3fdfc350358086d53c9ee26294983f688e"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.266033 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"1ab0d330fb12bc5326d725bac308511aa1fb2faae489b227d65b8cf1e3aa52d5"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.266044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"566622c854e9cda9094c9653505e2c60d9642f51a39cac4072cb7722d74d89a4"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.267799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.270007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273475 4867 generic.go:334] "Generic (PLEG): container finished" podID="62ee3130-2952-453e-82b6-dba068ba1bc9" 
containerID="6676250e6ab4328a00c955c252f7334c62f0069abe3d9ce15319bd01bbf22dd8" exitCode=0 Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerDied","Data":"6676250e6ab4328a00c955c252f7334c62f0069abe3d9ce15319bd01bbf22dd8"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"63a220aecf6d9618dc2a4c714dd800d1ef55f79fcac6e58e5693dec9210c1604"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.290402 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podStartSLOduration=1.8926475379999999 podStartE2EDuration="5.290347025s" podCreationTimestamp="2026-02-14 04:16:06 +0000 UTC" firstStartedPulling="2026-02-14 04:16:07.27449811 +0000 UTC m=+399.355435424" lastFinishedPulling="2026-02-14 04:16:10.672197597 +0000 UTC m=+402.753134911" observedRunningTime="2026-02-14 04:16:11.282064241 +0000 UTC m=+403.363001555" watchObservedRunningTime="2026-02-14 04:16:11.290347025 +0000 UTC m=+403.371284369" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.377554 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podStartSLOduration=1.672964886 podStartE2EDuration="5.377533831s" podCreationTimestamp="2026-02-14 04:16:06 +0000 UTC" firstStartedPulling="2026-02-14 04:16:06.958180777 +0000 UTC m=+399.039118091" lastFinishedPulling="2026-02-14 04:16:10.662749722 +0000 UTC m=+402.743687036" observedRunningTime="2026-02-14 04:16:11.376255818 +0000 UTC m=+403.457193142" watchObservedRunningTime="2026-02-14 04:16:11.377533831 +0000 UTC m=+403.458471145" Feb 
14 04:16:12 crc kubenswrapper[4867]: I0214 04:16:12.282533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"639d21805a94f75183a7db3daa9f3bc373f7cdf3d67020113d1e034c2cf56388"} Feb 14 04:16:12 crc kubenswrapper[4867]: I0214 04:16:12.283286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"f01f775fa1b8582cc6203a580eb810f3bc4133698ba910f8a34d5dace4711a59"} Feb 14 04:16:14 crc kubenswrapper[4867]: I0214 04:16:14.579744 4867 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-5rxcg container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 04:16:14 crc kubenswrapper[4867]: I0214 04:16:14.580456 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.307779 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"24413e9ce4db97b9a01e5d1bc087f8b72ce77a2da91cb1efbb9dd2aae6bf3986"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.312099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"d00b368d905164f5120c48870c7bc64d59c4964cb3f7346655f07db23a4047bd"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.314276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"147c2f8c163d08e9696f10f3abfcd588dd0513b30c08f7e00c51a9d7851cd103"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.342753 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.037592155 podStartE2EDuration="14.342711387s" podCreationTimestamp="2026-02-14 04:16:01 +0000 UTC" firstStartedPulling="2026-02-14 04:16:02.784256172 +0000 UTC m=+394.865193476" lastFinishedPulling="2026-02-14 04:16:15.089375394 +0000 UTC m=+407.170312708" observedRunningTime="2026-02-14 04:16:15.33898931 +0000 UTC m=+407.419926624" watchObservedRunningTime="2026-02-14 04:16:15.342711387 +0000 UTC m=+407.423648701" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.957103 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.957162 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.963649 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.322545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"5d3b9e6890a6983a76b8aaf4fbb189d3b95cb8b346e7654b16efa15fc1727158"} Feb 14 04:16:16 crc kubenswrapper[4867]: 
I0214 04:16:16.323004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"c3d8f2697ea91aea780e16dab27e369fe312387513058af3a300f091529a0d05"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"ce3cf96117dabb896387465fc0c257d4c04299c9b495615f2560e1637d1ca81f"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"dc7e54770405cf89b69354f3e30c9c2865b6bd4f85f209126b67b26b417b646c"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"599b4a74cea66c2d77401338e138b4d9b1a9f005f8b2c3f1104caf320d0c7126"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.327411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"49d5c225eb2af6354612f9d06ed06b8e4f4d89b994c5f92f22d0ac4184aa978f"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.327440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"02dba26e3be94b0469342c8cd74b724969629756f0f4acd78582351c566c3abd"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.330888 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.353585 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.534510957 podStartE2EDuration="9.353562954s" podCreationTimestamp="2026-02-14 04:16:07 +0000 UTC" firstStartedPulling="2026-02-14 04:16:11.275930972 +0000 UTC m=+403.356868296" lastFinishedPulling="2026-02-14 04:16:15.094982979 +0000 UTC m=+407.175920293" observedRunningTime="2026-02-14 04:16:16.349281914 +0000 UTC m=+408.430219238" watchObservedRunningTime="2026-02-14 04:16:16.353562954 +0000 UTC m=+408.434500278" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.398483 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podStartSLOduration=4.43365389 podStartE2EDuration="14.398457536s" podCreationTimestamp="2026-02-14 04:16:02 +0000 UTC" firstStartedPulling="2026-02-14 04:16:05.118726356 +0000 UTC m=+397.199663680" lastFinishedPulling="2026-02-14 04:16:15.083530012 +0000 UTC m=+407.164467326" observedRunningTime="2026-02-14 04:16:16.392971854 +0000 UTC m=+408.473909168" watchObservedRunningTime="2026-02-14 04:16:16.398457536 +0000 UTC m=+408.479394850" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.450620 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.004561 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.004761 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" 
containerID="cri-o://abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" gracePeriod=30 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.017592 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.017793 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" containerID="cri-o://6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" gracePeriod=30 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.335594 4867 generic.go:334] "Generic (PLEG): container finished" podID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerID="6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" exitCode=0 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.335712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerDied","Data":"6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6"} Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.337898 4867 generic.go:334] "Generic (PLEG): container finished" podID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerID="abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" exitCode=0 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.338073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerDied","Data":"abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7"} Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.339447 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.352027 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.464393 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.654102 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.663823 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816694 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc 
kubenswrapper[4867]: I0214 04:16:17.816736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod 
\"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817644 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config" (OuterVolumeSpecName: "config") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817902 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.818261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config" (OuterVolumeSpecName: "config") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.818497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca" (OuterVolumeSpecName: "client-ca") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg" (OuterVolumeSpecName: "kube-api-access-rgmdg") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "kube-api-access-rgmdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.830639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z" (OuterVolumeSpecName: "kube-api-access-shl4z") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "kube-api-access-shl4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919161 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919220 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919235 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919246 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919260 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919273 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919282 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919291 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919300 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerDied","Data":"36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c"} Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352814 4867 scope.go:117] "RemoveContainer" containerID="6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352828 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.356231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerDied","Data":"d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c"} Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.356301 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.372797 4867 scope.go:117] "RemoveContainer" containerID="abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.399720 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.406081 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.410676 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.413736 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988123 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988372 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" 
containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988387 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988405 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988425 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988431 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988557 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988570 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988578 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.988987 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.992305 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.992921 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993141 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993298 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993353 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993698 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993990 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.994317 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.994849 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.003815 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.004800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005103 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005392 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005455 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.017236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.023309 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" path="/var/lib/kubelet/pods/8708b876-3ece-4820-b4f1-35d9fb2a195c/volumes" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.023905 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" path="/var/lib/kubelet/pods/96b49908-c23d-45d6-b7fa-3d718d01ee00/volumes" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.024735 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.027678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: 
\"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138435 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.240911 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241202 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " 
pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241545 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.243222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.246371 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.247712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.248021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.250415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.257758 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.261370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.261440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.264375 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.339164 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.341939 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.752375 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:19 crc kubenswrapper[4867]: W0214 04:16:19.761816 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29172228_9eb8_461f_8f75_cdd021e0d30c.slice/crio-0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112 WatchSource:0}: Error finding container 0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112: Status 404 returned error can't find the container with id 0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112 Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.933308 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: W0214 04:16:19.934361 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9fc9dc1_437a_4160_b805_fabfd7f877c2.slice/crio-05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f WatchSource:0}: Error finding container 05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f: Status 404 returned error can't find the container with id 05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.382566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.382997 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.383011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.385840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.385902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.386077 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.389424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.392347 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.400871 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podStartSLOduration=3.4008482239999998 podStartE2EDuration="3.400848224s" podCreationTimestamp="2026-02-14 04:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:20.400646199 +0000 UTC m=+412.481583503" watchObservedRunningTime="2026-02-14 04:16:20.400848224 +0000 UTC m=+412.481785538" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.430586 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podStartSLOduration=3.430567243 podStartE2EDuration="3.430567243s" podCreationTimestamp="2026-02-14 04:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:20.424556278 +0000 UTC m=+412.505493592" watchObservedRunningTime="2026-02-14 04:16:20.430567243 +0000 UTC m=+412.511504557" Feb 14 04:16:26 crc kubenswrapper[4867]: I0214 04:16:26.449144 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:26 crc kubenswrapper[4867]: I0214 04:16:26.449715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.251473 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252172 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252265 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252937 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.253002 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" gracePeriod=600 Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461379 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" exitCode=0 Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461495 4867 scope.go:117] "RemoveContainer" 
containerID="c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" Feb 14 04:16:32 crc kubenswrapper[4867]: I0214 04:16:32.471831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} Feb 14 04:16:41 crc kubenswrapper[4867]: I0214 04:16:41.535195 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" containerID="cri-o://63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" gracePeriod=15 Feb 14 04:16:41 crc kubenswrapper[4867]: E0214 04:16:41.653098 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-conmon-63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.030038 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c4c52_bb63883f-65f5-4107-877a-ff786d6c00f9/console/0.log" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.030107 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.110563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111606 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112220 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112695 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config" (OuterVolumeSpecName: "console-config") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113535 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113570 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113585 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113595 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.117732 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t" (OuterVolumeSpecName: "kube-api-access-zvv7t") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "kube-api-access-zvv7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.118260 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.118878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.214997 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.215062 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.215088 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.549896 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-c4c52_bb63883f-65f5-4107-877a-ff786d6c00f9/console/0.log" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550285 4867 generic.go:334] "Generic (PLEG): container finished" podID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" exitCode=2 Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerDied","Data":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"} Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550373 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550401 4867 scope.go:117] "RemoveContainer" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerDied","Data":"0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4"} Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.584364 4867 scope.go:117] "RemoveContainer" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: E0214 04:16:42.584818 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": container with ID starting with 63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b not found: ID does not exist" 
containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.584859 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"} err="failed to get container status \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": rpc error: code = NotFound desc = could not find container \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": container with ID starting with 63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b not found: ID does not exist" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.589353 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.593687 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:43 crc kubenswrapper[4867]: I0214 04:16:43.011502 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" path="/var/lib/kubelet/pods/bb63883f-65f5-4107-877a-ff786d6c00f9/volumes" Feb 14 04:16:46 crc kubenswrapper[4867]: I0214 04:16:46.460036 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:46 crc kubenswrapper[4867]: I0214 04:16:46.468064 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.464708 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.516327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.745466 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.567058 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:51 crc kubenswrapper[4867]: E0214 04:17:51.568654 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.568681 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.568959 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.569947 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.580944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.643890 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644394 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644456 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746896 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746972 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.748412 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.748815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.749767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.750989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.756111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.756705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.770340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.890720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:52 crc kubenswrapper[4867]: I0214 04:17:52.390332 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.020966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerStarted","Data":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.021537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerStarted","Data":"129cdcd69132d20dcbb1f824da4d34637e927a59f414ddd5999cdc93d09a0538"} Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.045293 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6687988ff8-hggh9" podStartSLOduration=2.045260152 podStartE2EDuration="2.045260152s" podCreationTimestamp="2026-02-14 04:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:17:53.038679661 +0000 UTC m=+505.119616995" watchObservedRunningTime="2026-02-14 04:17:53.045260152 +0000 UTC m=+505.126197466" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.892034 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.893302 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.896294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:02 crc kubenswrapper[4867]: I0214 04:18:02.090650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:02 crc kubenswrapper[4867]: I0214 04:18:02.198003 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.247334 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7fbfc7fbd4-76v9z" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" containerID="cri-o://df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" gracePeriod=15 Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.628757 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbfc7fbd4-76v9z_f77d496b-c6fc-478c-9bf7-7ea59cb3a474/console/0.log" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.629239 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810643 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810877 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810950 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810973 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811601 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca" (OuterVolumeSpecName: "service-ca") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811877 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config" (OuterVolumeSpecName: "console-config") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch" (OuterVolumeSpecName: "kube-api-access-wsfch") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "kube-api-access-wsfch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912787 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912827 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912838 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912851 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912860 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912869 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912879 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:28 crc 
kubenswrapper[4867]: I0214 04:18:28.334110 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbfc7fbd4-76v9z_f77d496b-c6fc-478c-9bf7-7ea59cb3a474/console/0.log" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334772 4867 generic.go:334] "Generic (PLEG): container finished" podID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" exitCode=2 Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334849 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerDied","Data":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.335067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerDied","Data":"27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7"} Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.335116 4867 scope.go:117] "RemoveContainer" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.362339 4867 scope.go:117] "RemoveContainer" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: E0214 04:18:28.363075 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": container with ID starting with df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd 
not found: ID does not exist" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.363153 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} err="failed to get container status \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": rpc error: code = NotFound desc = could not find container \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": container with ID starting with df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd not found: ID does not exist" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.386318 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.395728 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:29 crc kubenswrapper[4867]: I0214 04:18:29.012292 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" path="/var/lib/kubelet/pods/f77d496b-c6fc-478c-9bf7-7ea59cb3a474/volumes" Feb 14 04:18:31 crc kubenswrapper[4867]: I0214 04:18:31.251575 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:18:31 crc kubenswrapper[4867]: I0214 04:18:31.252013 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 14 04:19:01 crc kubenswrapper[4867]: I0214 04:19:01.250659 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:19:01 crc kubenswrapper[4867]: I0214 04:19:01.251429 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.779653 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.780420 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.780496 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.781211 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.781281 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" gracePeriod=600 Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.801269 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" exitCode=0 Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.801364 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.802623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.802669 4867 scope.go:117] "RemoveContainer" containerID="a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.750892 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"] Feb 14 04:19:37 crc 
kubenswrapper[4867]: E0214 04:19:37.752251 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.752272 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.752449 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.753740 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.755747 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.763216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"] Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904799 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: 
\"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006161 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006226 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.026253 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.080587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.546098 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"]
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.850084 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerStarted","Data":"af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2"}
Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.850928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerStarted","Data":"6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5"}
Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.865896 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2" exitCode=0
Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.865984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2"}
Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.869282 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 04:19:41 crc kubenswrapper[4867]: I0214 04:19:41.884731 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="99ce3c3b81d9334b837bda835fc6970e3b0d6e93be7564016ddab6611c14d7dc" exitCode=0
Feb 14 04:19:41 crc kubenswrapper[4867]: I0214 04:19:41.885323 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"99ce3c3b81d9334b837bda835fc6970e3b0d6e93be7564016ddab6611c14d7dc"}
Feb 14 04:19:42 crc kubenswrapper[4867]: I0214 04:19:42.897139 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="628903026be532a3bac7ed17fd2ccb0174f67522fb4dc5532429553a1a26adf4" exitCode=0
Feb 14 04:19:42 crc kubenswrapper[4867]: I0214 04:19:42.897335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"628903026be532a3bac7ed17fd2ccb0174f67522fb4dc5532429553a1a26adf4"}
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.214531 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") "
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") "
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236684 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") "
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.241919 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle" (OuterVolumeSpecName: "bundle") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.243797 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.259119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj" (OuterVolumeSpecName: "kube-api-access-hj7hj") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "kube-api-access-hj7hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.345557 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") on node \"crc\" DevicePath \"\""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.535987 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util" (OuterVolumeSpecName: "util") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.549351 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") on node \"crc\" DevicePath \"\""
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5"}
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920310 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5"
Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920403 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.759322 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"]
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761116 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller" containerID="cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761243 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761292 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node" containerID="cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761372 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd" containerID="cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761237 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb" containerID="cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761329 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb" containerID="cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761364 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging" containerID="cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.793678 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" containerID="cri-o://e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" gracePeriod=30
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.950902 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.958992 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.960731 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964004 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" exitCode=143
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964110 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" exitCode=143
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964100 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"}
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"}
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.970764 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971670 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971744 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" exitCode=2
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"}
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971848 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"
Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.973325 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"
Feb 14 04:19:48 crc kubenswrapper[4867]: E0214 04:19:48.973839 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192"
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.983967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log"
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.987410 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log"
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.987972 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log"
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988550 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" exitCode=0
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988587 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" exitCode=0
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988599 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" exitCode=0
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988609 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" exitCode=0
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988618 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" exitCode=0
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"}
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"}
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"}
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"}
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"}
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988767 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a"
Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.991448 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.087887 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.088559 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.089341 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222276 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58t7"]
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222783 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222811 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222825 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222835 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222846 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222853 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222891 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222899 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222914 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kubecfg-setup"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222922 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kubecfg-setup"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222936 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222944 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222960 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222969 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222991 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223002 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223011 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223019 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223026 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223034 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="util"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223042 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="util"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223054 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="pull"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223061 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="pull"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223083 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223091 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223101 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223109 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223262 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223280 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223295 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223305 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223316 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223326 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223335 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223345 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223356 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223369 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223396 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223406 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223548 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223558 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223576 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223584 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223702 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.225797 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242416 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242454 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242538 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242570 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash" (OuterVolumeSpecName: "host-slash") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242714 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242746 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243212 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243391 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243412 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243540 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") "
Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243554 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket" (OuterVolumeSpecName: "log-socket") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "log-socket".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243597 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243822 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243855 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243877 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243887 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243936 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243999 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244028 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log" (OuterVolumeSpecName: "node-log") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244085 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244135 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: 
\"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244311 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244332 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244479 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244523 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244575 4867 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244587 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244597 4867 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244607 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244616 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244626 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc 
kubenswrapper[4867]: I0214 04:19:50.244635 4867 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244644 4867 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244653 4867 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244664 4867 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244675 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244685 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244697 4867 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244706 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244716 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244724 4867 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244734 4867 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.254006 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.264110 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7" (OuterVolumeSpecName: "kube-api-access-kmqj7") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "kube-api-access-kmqj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.273094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345935 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345969 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc 
kubenswrapper[4867]: I0214 04:19:50.346305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346575 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod 
\"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346751 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346823 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347064 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347209 4867 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347225 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmqj7\" 
(UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347215 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347242 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 
04:19:50.351467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.366205 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.539070 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: W0214 04:19:50.568685 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b78a78d_1660_47ec_a3c6_b826a798ef37.slice/crio-84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229 WatchSource:0}: Error finding container 84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229: Status 404 returned error can't find the container with id 84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229 Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.999164 4867 generic.go:334] "Generic (PLEG): container finished" podID="6b78a78d-1660-47ec-a3c6-b826a798ef37" containerID="b68125d78fd85d06fd9c2b62bbe98e953d49c4fa37c04c665ae8afbcb5398138" exitCode=0 Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.006436 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 
04:19:51.007449 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.007867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerDied","Data":"b68125d78fd85d06fd9c2b62bbe98e953d49c4fa37c04c665ae8afbcb5398138"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.007922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008105 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" exitCode=0 Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008181 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008210 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008254 4867 scope.go:117] "RemoveContainer" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"766035eb89c0e6059ab573e34c9ca67206f8aeefdcb68c749029bbaceeefc307"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.050741 4867 scope.go:117] "RemoveContainer" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.096498 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.105954 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.107695 4867 scope.go:117] "RemoveContainer" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.128641 4867 scope.go:117] "RemoveContainer" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.168871 4867 scope.go:117] "RemoveContainer" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.193737 4867 scope.go:117] "RemoveContainer" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.216974 4867 scope.go:117] "RemoveContainer" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 
crc kubenswrapper[4867]: I0214 04:19:51.273664 4867 scope.go:117] "RemoveContainer" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.311366 4867 scope.go:117] "RemoveContainer" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334437 4867 scope.go:117] "RemoveContainer" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.334917 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": container with ID starting with e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444 not found: ID does not exist" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334955 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"} err="failed to get container status \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": rpc error: code = NotFound desc = could not find container \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": container with ID starting with e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334985 4867 scope.go:117] "RemoveContainer" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.335322 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": container with ID starting with b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5 not found: ID does not exist" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335348 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"} err="failed to get container status \"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": rpc error: code = NotFound desc = could not find container \"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": container with ID starting with b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335364 4867 scope.go:117] "RemoveContainer" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.335611 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": container with ID starting with ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd not found: ID does not exist" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335634 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"} err="failed to get container status \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": rpc error: code = NotFound desc = could not find container \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": container with ID 
starting with ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335651 4867 scope.go:117] "RemoveContainer" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336049 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": container with ID starting with d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633 not found: ID does not exist" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336071 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"} err="failed to get container status \"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": rpc error: code = NotFound desc = could not find container \"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": container with ID starting with d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336091 4867 scope.go:117] "RemoveContainer" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336340 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": container with ID starting with 250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307 not found: ID does not exist" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 
04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336365 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"} err="failed to get container status \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": rpc error: code = NotFound desc = could not find container \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": container with ID starting with 250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336380 4867 scope.go:117] "RemoveContainer" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336607 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": container with ID starting with 92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18 not found: ID does not exist" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336630 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"} err="failed to get container status \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": rpc error: code = NotFound desc = could not find container \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": container with ID starting with 92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336653 4867 scope.go:117] "RemoveContainer" 
containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336863 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": container with ID starting with 669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6 not found: ID does not exist" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336885 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"} err="failed to get container status \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": rpc error: code = NotFound desc = could not find container \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": container with ID starting with 669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336899 4867 scope.go:117] "RemoveContainer" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.337113 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": container with ID starting with e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e not found: ID does not exist" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337135 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"} err="failed to get container status \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": rpc error: code = NotFound desc = could not find container \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": container with ID starting with e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337149 4867 scope.go:117] "RemoveContainer" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.337358 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": container with ID starting with cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288 not found: ID does not exist" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337385 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288"} err="failed to get container status \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": rpc error: code = NotFound desc = could not find container \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": container with ID starting with cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288 not found: ID does not exist" Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" 
event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e39d145a2223f01efcf78f1860ba0edcf7bacd85f3acadcbbcfc8a2538a351a3"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"c46d5153b8539c36d82f02078b671594fe0c7c666a32d5cb85c9333db720bd6f"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e54daa57901cc4b680b7f5f69f638890e05b8c6b7a460d696bac8aae447d5ec5"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"28b23ae4549cd208242d8e0a94c5d121764cc9af9566976e376ca094c0931a05"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"2d0e373d70ea57ce93d860ad33bb4a9ad3c10f6ac08fe79c5a4f8ecd58b4104d"} Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.006850 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" path="/var/lib/kubelet/pods/34391a30-5865-46e9-af5f-705cc3b11fba/volumes" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.030608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" 
event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"4053b5fc4e7c5d07a1bbdd744111208b45c99da6277ba21afde0062a749c4888"} Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.322119 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.323631 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.326978 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.327167 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.333382 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9jn2g" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.406594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.480567 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.482164 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.488970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5jt8v" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.489278 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.499909 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.501042 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.507940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.535146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.610739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611352 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.640101 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.686996 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.688594 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.695959 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.696690 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-m6hgx" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.700939 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701060 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701135 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701233 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.712965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.727616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.798085 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.814722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.814785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.818748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.822169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834013 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834160 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834236 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834336 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.834580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855430 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855498 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855536 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855580 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.891376 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"]
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.892140 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.895913 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dp82x"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.915573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.915624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.016764 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.016989 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.017904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.052835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.061761 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085652 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085835 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085909 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085995 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.211994 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248480 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248567 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248589 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248632 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006"
Feb 14 04:19:55 crc kubenswrapper[4867]: I0214 04:19:55.045841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e4b291afb2db520f18f44963148c2bf71665ebbd19355fb7c116485db5332b52"}
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.064147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"02243c13ceb5b936438b09dd590ac5f5a805cbc8f6fdb50df02d30457d02d0e6"}
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.066127 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.066154 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.110621 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" podStartSLOduration=7.110603088 podStartE2EDuration="7.110603088s" podCreationTimestamp="2026-02-14 04:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:19:57.106278805 +0000 UTC m=+629.187216119" watchObservedRunningTime="2026-02-14 04:19:57.110603088 +0000 UTC m=+629.191540402"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.118693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.070534 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.112072 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.193502 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.193733 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.194464 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.196982 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.197074 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.197361 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.201794 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.201930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.202463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216234 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216880 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263694 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263760 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263789 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263837 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299791 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299897 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299932 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299987 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299800 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300131 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300161 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300210 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305768 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305843 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305870 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305921 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.339574 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.339752 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.340373 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.397723 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399380 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399480 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399650 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006"
Feb 14 04:20:03 crc kubenswrapper[4867]: I0214 04:20:03.997460 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"
Feb 14 04:20:04 crc kubenswrapper[4867]: E0214 04:20:03.998376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192"
Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996400 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996530 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.003895 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.004216 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.004939 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.074907 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.074997 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.075026 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.075079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078570 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078691 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078732 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078828 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105535 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105635 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105666 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105744 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.997150 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.998314 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.020842 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.020963 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.021002 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.021084 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86" Feb 14 04:20:13 crc kubenswrapper[4867]: I0214 04:20:13.997027 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:13 crc kubenswrapper[4867]: I0214 04:20:13.997821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035121 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035185 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035207 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035261 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314" Feb 14 04:20:15 crc kubenswrapper[4867]: I0214 04:20:15.997833 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" Feb 14 04:20:16 crc kubenswrapper[4867]: I0214 04:20:16.190810 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log" Feb 14 04:20:16 crc kubenswrapper[4867]: I0214 04:20:16.191179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"64b5a084853a3d1ad08a41ce2324a38f7ea9e21f0b5662f7fbd7a03aa0fb2e2b"} Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.560294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.996801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.997772 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.430166 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"] Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.997217 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.998017 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.253653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" event={"ID":"987816d4-f9a4-47da-983c-317f9a3f4d86","Type":"ContainerStarted","Data":"26824e7cea2dae02dc3534ea1722997deea1091e4a48e6085d54be1784dce4e4"} Feb 14 04:20:22 crc kubenswrapper[4867]: W0214 04:20:22.566010 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f47db9_4437_4b3e_aee5_f6f65e715e62.slice/crio-7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452 WatchSource:0}: Error finding container 7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452: Status 404 returned error can't find the container with id 7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452 Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.577342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"] Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.997213 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.998234 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.260136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" event={"ID":"94f47db9-4437-4b3e-aee5-f6f65e715e62","Type":"ContainerStarted","Data":"7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452"} Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.457781 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"] Feb 14 04:20:23 crc kubenswrapper[4867]: W0214 04:20:23.464988 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f03187_50f6_4015_afdc_422455a63006.slice/crio-f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd WatchSource:0}: Error finding container f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd: Status 404 returned error can't find the container with id f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.996926 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.997389 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:24 crc kubenswrapper[4867]: I0214 04:20:24.271063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" event={"ID":"31f03187-50f6-4015-afdc-422455a63006","Type":"ContainerStarted","Data":"f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd"} Feb 14 04:20:25 crc kubenswrapper[4867]: I0214 04:20:25.000666 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:25 crc kubenswrapper[4867]: I0214 04:20:25.001305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:26 crc kubenswrapper[4867]: I0214 04:20:26.280307 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"] Feb 14 04:20:26 crc kubenswrapper[4867]: W0214 04:20:26.297655 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ecc414b_6bac_4b24_99c5_e2d1fb67f314.slice/crio-71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40 WatchSource:0}: Error finding container 71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40: Status 404 returned error can't find the container with id 71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40 Feb 14 04:20:26 crc kubenswrapper[4867]: I0214 04:20:26.327280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"] Feb 14 04:20:26 crc kubenswrapper[4867]: W0214 04:20:26.334579 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7f9ea9_2c5c_4e9c_97b2_02dd8a216d06.slice/crio-728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3 WatchSource:0}: Error finding container 728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3: Status 404 returned error can't find the container with id 728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3 Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.308940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" event={"ID":"987816d4-f9a4-47da-983c-317f9a3f4d86","Type":"ContainerStarted","Data":"cbf9077b91953cb3be07a2606e135c0b662c7ab6313046b9f2d7f2c4a5008722"} Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.310754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" event={"ID":"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06","Type":"ContainerStarted","Data":"728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3"} Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.312108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" event={"ID":"5ecc414b-6bac-4b24-99c5-e2d1fb67f314","Type":"ContainerStarted","Data":"71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40"} Feb 14 04:20:29 crc kubenswrapper[4867]: I0214 04:20:29.046813 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podStartSLOduration=31.423637257 podStartE2EDuration="36.046790021s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:21.449434826 +0000 UTC m=+653.530372160" lastFinishedPulling="2026-02-14 04:20:26.07258761 +0000 UTC m=+658.153524924" observedRunningTime="2026-02-14 04:20:27.342403718 +0000 UTC 
m=+659.423341342" watchObservedRunningTime="2026-02-14 04:20:29.046790021 +0000 UTC m=+661.127727335" Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.358961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" event={"ID":"31f03187-50f6-4015-afdc-422455a63006","Type":"ContainerStarted","Data":"bb4dc5e070beeca7160b299e1daf4e9ad29a2f879f5242e662fd2e946c49bc73"} Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.359698 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.361089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" event={"ID":"94f47db9-4437-4b3e-aee5-f6f65e715e62","Type":"ContainerStarted","Data":"71bc852131d72d72e543b26eddd8266b87750cb6e354e70eb54aa965c01b1cbc"} Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.362117 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.367898 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.422444 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podStartSLOduration=31.441289864 podStartE2EDuration="38.422402381s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:23.470563456 +0000 UTC m=+655.551500780" lastFinishedPulling="2026-02-14 04:20:30.451675983 +0000 UTC m=+662.532613297" observedRunningTime="2026-02-14 04:20:31.388783848 +0000 UTC m=+663.469721162" watchObservedRunningTime="2026-02-14 04:20:31.422402381 +0000 UTC m=+663.503339695" Feb 
14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.429863 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podStartSLOduration=30.514869796 podStartE2EDuration="38.429828323s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:22.570590055 +0000 UTC m=+654.651527379" lastFinishedPulling="2026-02-14 04:20:30.485548592 +0000 UTC m=+662.566485906" observedRunningTime="2026-02-14 04:20:31.412768381 +0000 UTC m=+663.493705695" watchObservedRunningTime="2026-02-14 04:20:31.429828323 +0000 UTC m=+663.510765637"
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.373056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" event={"ID":"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06","Type":"ContainerStarted","Data":"307febcdcdb9c846745d1413ad562cac6ff49caa8d054c359db0a38e9e44ac19"}
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.376700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" event={"ID":"5ecc414b-6bac-4b24-99c5-e2d1fb67f314","Type":"ContainerStarted","Data":"f8cd56a8e3e561bf25e54947fdc28176cacf29e6f79645dff743e3cef10f1b11"}
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.405043 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podStartSLOduration=33.905051559 podStartE2EDuration="39.405026526s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:26.338252633 +0000 UTC m=+658.419189947" lastFinishedPulling="2026-02-14 04:20:31.83822759 +0000 UTC m=+663.919164914" observedRunningTime="2026-02-14 04:20:32.39940491 +0000 UTC m=+664.480342224" watchObservedRunningTime="2026-02-14 04:20:32.405026526 +0000 UTC m=+664.485963840"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.065991 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podStartSLOduration=42.544124155 podStartE2EDuration="48.065953939s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:26.302722161 +0000 UTC m=+658.383659475" lastFinishedPulling="2026-02-14 04:20:31.824551945 +0000 UTC m=+663.905489259" observedRunningTime="2026-02-14 04:20:32.445030954 +0000 UTC m=+664.525968268" watchObservedRunningTime="2026-02-14 04:20:41.065953939 +0000 UTC m=+673.146891253"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.070908 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.092835 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.099047 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sqjrv"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.099432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.113885 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.119589 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.146134 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.147132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.152204 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-59r5z"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.156104 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.160299 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.161113 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.162845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-prrbb"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.169922 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.169997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.172168 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.296417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.297077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.374982 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.396296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.436026 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.479128 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.493973 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.912989 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.989377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"]
Feb 14 04:20:41 crc kubenswrapper[4867]: W0214 04:20:41.993588 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34f53dfe_4707_4a5c_8745_c4ed944c6a6a.slice/crio-cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75 WatchSource:0}: Error finding container cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75: Status 404 returned error can't find the container with id cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75
Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.000215 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"]
Feb 14 04:20:42 crc kubenswrapper[4867]: W0214 04:20:42.001423 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f305679_0f4d_440e_a053_7b3627eaae9c.slice/crio-036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0 WatchSource:0}: Error finding container 036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0: Status 404 returned error can't find the container with id 036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0
Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.436912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" event={"ID":"2224c85e-13be-400d-abf8-6b412d8c55ee","Type":"ContainerStarted","Data":"e94df9d68f3febd087b09eb3471b463abe9c0f2f8cd35bcbe0c1a1e6258073d1"}
Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.439739 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" event={"ID":"34f53dfe-4707-4a5c-8745-c4ed944c6a6a","Type":"ContainerStarted","Data":"cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75"}
Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.441293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gslqt" event={"ID":"1f305679-0f4d-440e-a053-7b3627eaae9c","Type":"ContainerStarted","Data":"036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0"}
Feb 14 04:20:44 crc kubenswrapper[4867]: I0214 04:20:44.214941 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.484071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gslqt" event={"ID":"1f305679-0f4d-440e-a053-7b3627eaae9c","Type":"ContainerStarted","Data":"e95342b2b45d020e15caa63292a465041cb8836c16cb24c6bb87232b7fd208eb"}
Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.499119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" event={"ID":"2224c85e-13be-400d-abf8-6b412d8c55ee","Type":"ContainerStarted","Data":"c36b93a354b0b41cf00fdfe21e7201f2636dd31bad4023539836d40f441ab4b3"}
Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.501755 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gslqt" podStartSLOduration=2.101220248 podStartE2EDuration="5.501743491s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:42.003539746 +0000 UTC m=+674.084477070" lastFinishedPulling="2026-02-14 04:20:45.404062999 +0000 UTC m=+677.485000313" observedRunningTime="2026-02-14 04:20:46.500135889 +0000 UTC m=+678.581073203" watchObservedRunningTime="2026-02-14 04:20:46.501743491 +0000 UTC m=+678.582680805"
Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.518478 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" podStartSLOduration=2.052952155 podStartE2EDuration="5.518460724s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:41.929612158 +0000 UTC m=+674.010549472" lastFinishedPulling="2026-02-14 04:20:45.395120727 +0000 UTC m=+677.476058041" observedRunningTime="2026-02-14 04:20:46.517654823 +0000 UTC m=+678.598592137" watchObservedRunningTime="2026-02-14 04:20:46.518460724 +0000 UTC m=+678.599398038"
Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.510945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" event={"ID":"34f53dfe-4707-4a5c-8745-c4ed944c6a6a","Type":"ContainerStarted","Data":"43675d8fb1f5f7952da09285b8b5e7514389d4902a871cc81a26a7e94a924dce"}
Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.511639 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.527192 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podStartSLOduration=1.7720035859999999 podStartE2EDuration="7.527176814s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:41.996269798 +0000 UTC m=+674.077207112" lastFinishedPulling="2026-02-14 04:20:47.751443026 +0000 UTC m=+679.832380340" observedRunningTime="2026-02-14 04:20:48.524555355 +0000 UTC m=+680.605492679" watchObservedRunningTime="2026-02-14 04:20:48.527176814 +0000 UTC m=+680.608114128"
Feb 14 04:20:56 crc kubenswrapper[4867]: I0214 04:20:56.496670 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.053794 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"]
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.055405 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.059013 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.066163 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"]
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.105937 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.106236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.106963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.208871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.208961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209465 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.234007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.252059 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"]
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.253426 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.257498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"]
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311091 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.403099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.414099 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.445127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.577801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.851835 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"]
Feb 14 04:21:22 crc kubenswrapper[4867]: W0214 04:21:22.862723 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf62ec3e_1c1b_400e_bdb9_ba34fc8ef5fe.slice/crio-b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981 WatchSource:0}: Error finding container b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981: Status 404 returned error can't find the container with id b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.006391 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"]
Feb 14 04:21:23 crc kubenswrapper[4867]: W0214 04:21:23.015271 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod936b69da_ce28_43de_8fcf_82e83936de1b.slice/crio-0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944 WatchSource:0}: Error finding container 0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944: Status 404 returned error can't find the container with id 0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772173 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="6bfa7c6acd7c8c92626e99a48aecf16ab0c89ae282cd4b5118f689bf23a2ab52" exitCode=0
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772247 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"6bfa7c6acd7c8c92626e99a48aecf16ab0c89ae282cd4b5118f689bf23a2ab52"}
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerStarted","Data":"0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944"}
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778411 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="9e99be5c9abef532eba38f6dff91b8ae91fc0cced050278867e635b112e193c5" exitCode=0
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"9e99be5c9abef532eba38f6dff91b8ae91fc0cced050278867e635b112e193c5"}
Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerStarted","Data":"b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981"}
Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.794896 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="8f5d917721b65e0e84135424ad3901bd1acb03297e90b28dad3e52574bf58538" exitCode=0
Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.794965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"8f5d917721b65e0e84135424ad3901bd1acb03297e90b28dad3e52574bf58538"}
Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.798708 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="febc7d092e485320820377013c4367f941560dd0a33e6efe70f78f0bf91202e8" exitCode=0
Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.798764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"febc7d092e485320820377013c4367f941560dd0a33e6efe70f78f0bf91202e8"}
Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.809623 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="70d0e664d24e7987723b393e88112cf3a22e64c5f670b8ef56e251a1202d5cd7" exitCode=0
Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.809734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"70d0e664d24e7987723b393e88112cf3a22e64c5f670b8ef56e251a1202d5cd7"}
Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.812721 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="89369855acb4f048b792a2b970408f0f9a668d7d4ff843ff88f8102b02cc83d4" exitCode=0
Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.812770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"89369855acb4f048b792a2b970408f0f9a668d7d4ff843ff88f8102b02cc83d4"}
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.147789 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.153212 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211261 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211439 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211470 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") "
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.212443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle" (OuterVolumeSpecName: "bundle") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.212535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle" (OuterVolumeSpecName: "bundle") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.219010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd" (OuterVolumeSpecName: "kube-api-access-pnpvd") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "kube-api-access-pnpvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.219825 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h" (OuterVolumeSpecName: "kube-api-access-dm59h") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "kube-api-access-dm59h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.226383 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util" (OuterVolumeSpecName: "util") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.312976 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313006 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313018 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313047 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313061 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.486447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util" (OuterVolumeSpecName: "util") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.517235 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") on node \"crc\" DevicePath \"\""
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981"}
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832243 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981"
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832329 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.837007 4867 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.836983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944"} Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.837196 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421116 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421919 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421934 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421948 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421955 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421966 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421972 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421992 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421998 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.422009 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422015 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.422027 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422032 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422162 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422178 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422904 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.429151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.429424 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430094 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430231 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430459 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7c2k7" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.463378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580487 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 
04:21:39.580540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580587 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580785 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.681944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682123 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: 
\"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.683231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.689087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.691138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.695111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.738255 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.744006 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:40 crc kubenswrapper[4867]: I0214 04:21:40.042288 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:40 crc kubenswrapper[4867]: I0214 04:21:40.969178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"b994afbb522d24b99f5b88b1fbd3b41a5d82670388c2b6f24ccd5dd218f84162"} Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.169206 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.170064 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.171962 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.172210 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.172686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-v884l" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.189310 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.231154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod \"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.332195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod \"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.361111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod 
\"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.488306 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.993964 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:46 crc kubenswrapper[4867]: I0214 04:21:46.007035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" event={"ID":"89b20edb-1b24-48e1-accf-f0a2b65c8da1","Type":"ContainerStarted","Data":"6b1e31a6875202fbbbcbddba12eee32dd303506c468bcfc3eeffb2edb2233e83"} Feb 14 04:21:46 crc kubenswrapper[4867]: I0214 04:21:46.009210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.116686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" event={"ID":"89b20edb-1b24-48e1-accf-f0a2b65c8da1","Type":"ContainerStarted","Data":"3cf55d4e6e13765ab8cd9dc9a5d145fd9be51067503785dcd4d85e10f972cae1"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.119392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"d87cafe09abaf2bf091dfef60ad31bf9fbb60a8b8a09fb6c7224b5451333cab6"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.119647 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.121898 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.139694 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" podStartSLOduration=3.921919325 podStartE2EDuration="16.139668184s" podCreationTimestamp="2026-02-14 04:21:42 +0000 UTC" firstStartedPulling="2026-02-14 04:21:45.155871913 +0000 UTC m=+737.236809227" lastFinishedPulling="2026-02-14 04:21:57.373620772 +0000 UTC m=+749.454558086" observedRunningTime="2026-02-14 04:21:58.132691004 +0000 UTC m=+750.213628318" watchObservedRunningTime="2026-02-14 04:21:58.139668184 +0000 UTC m=+750.220605498" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.175968 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podStartSLOduration=1.856826361 podStartE2EDuration="19.175951042s" podCreationTimestamp="2026-02-14 04:21:39 +0000 UTC" firstStartedPulling="2026-02-14 04:21:40.055254091 +0000 UTC m=+732.136191405" lastFinishedPulling="2026-02-14 04:21:57.374378772 +0000 UTC m=+749.455316086" observedRunningTime="2026-02-14 04:21:58.174269938 +0000 UTC m=+750.255207292" watchObservedRunningTime="2026-02-14 04:21:58.175951042 +0000 UTC m=+750.256888356" Feb 14 04:22:00 crc kubenswrapper[4867]: I0214 04:22:00.886390 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 04:22:01 crc kubenswrapper[4867]: I0214 04:22:01.251226 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:22:01 crc kubenswrapper[4867]: I0214 04:22:01.251323 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.451744 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.453778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457162 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457444 4867 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-vzthp" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457681 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.460032 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.482242 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc 
kubenswrapper[4867]: I0214 04:22:02.482343 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.583465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.583568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.586631 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.586668 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb91cce80dbcfbcaac1779d0ca18fe386616c5db8c3101f1555325d53b799300/globalmount\"" pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.601764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.627936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.799075 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 14 04:22:03 crc kubenswrapper[4867]: I0214 04:22:03.449333 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 04:22:04 crc kubenswrapper[4867]: I0214 04:22:04.162593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca1edb5b-df43-4a3d-83ea-01030d18e02e","Type":"ContainerStarted","Data":"46a2d9186bae73d2af69ff00c8a20c35525c98e39295b27fccdeeb957d08e4e1"} Feb 14 04:22:08 crc kubenswrapper[4867]: I0214 04:22:08.189636 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca1edb5b-df43-4a3d-83ea-01030d18e02e","Type":"ContainerStarted","Data":"9c21f44f4c013c7abebbf1fb3807ffb0abaac14a1032897789288eda0314b507"} Feb 14 04:22:08 crc kubenswrapper[4867]: I0214 04:22:08.206644 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.641241569 podStartE2EDuration="9.206619008s" podCreationTimestamp="2026-02-14 04:21:59 +0000 UTC" firstStartedPulling="2026-02-14 04:22:03.465056596 +0000 UTC m=+755.545993910" lastFinishedPulling="2026-02-14 04:22:07.030434035 +0000 UTC m=+759.111371349" observedRunningTime="2026-02-14 04:22:08.201707781 +0000 UTC m=+760.282645115" watchObservedRunningTime="2026-02-14 04:22:08.206619008 +0000 UTC m=+760.287556362" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.258195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.260356 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.262242 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267430 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-xq9r8"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267618 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267808 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267817 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.272103 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.326893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327697 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327806 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.429610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.429913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430212 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.431197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.431278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.445585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.448223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.472884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.510220 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.511038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524337 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.543826 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.583559 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637178 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637304 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.694150 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.694969 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.700934 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.701109 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.717480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740383 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740460 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwwr\" (UniqueName: \"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740490 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740528 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740551 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740569 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.741424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.747452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.752532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.752826 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.753473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.776681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843000 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnwwr\" (UniqueName: \"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.844224 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.844864 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.862487 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.862579 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.871445 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.873612 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.873683 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.882351 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.888619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.888925 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889170 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889289 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-nrktg"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.897177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwwr\" (UniqueName: \"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.907243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.916726 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.942316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"]
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944958 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945064 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945228 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945300 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046524 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046676 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046760 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046839 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.048229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.048693 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.048791 4867 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.048844 4867 nestedpendingoperations.go:348] Operation
for "{volumeName:kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret podName:d28844dc-6974-446b-bd9a-b22586858387 nodeName:}" failed. No retries permitted until 2026-02-14 04:22:14.548826949 +0000 UTC m=+766.629764353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret") pod "logging-loki-gateway-767ffcbf75-md7ts" (UID: "d28844dc-6974-446b-bd9a-b22586858387") : secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.054593 4867 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.054647 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret podName:0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5 nodeName:}" failed. No retries permitted until 2026-02-14 04:22:14.554630859 +0000 UTC m=+766.635568173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret") pod "logging-loki-gateway-767ffcbf75-l82l4" (UID: "0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5") : secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.054825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.054838 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.055546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.055767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc 
kubenswrapper[4867]: I0214 04:22:14.056779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.058405 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.059247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.060418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.061840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " 
pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.062149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.070054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.074867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.081193 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.089563 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.274427 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" 
event={"ID":"c9201352-8585-47d4-9c13-b9e21ac4cd9f","Type":"ContainerStarted","Data":"d6f02e514e7f08c4229f0d59d59d435798427e01cc5fc499d4c40561df5d700a"} Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.451277 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.454133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.458794 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.459031 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.488371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.501103 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.550611 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.551437 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.553406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.553444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.554368 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.554494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.560891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.563349 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.646280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655058 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.656001 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.659631 4867 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.659679 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf92a0cb196b1b992931cfb10952aecbe618752564bc39ecf0c6e130663d619b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.660617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.689079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757460 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757559 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757586 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757640 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758308 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.759187 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.759444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.760866 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.760903 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4319a35af2a769dfaedede67f86e0598a0eb8249043dc7339b30d4dc2ae902c5/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761689 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " 
pod="openshift-logging/logging-loki-ingester-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.779187 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.779597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.780473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.785452 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.785884 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.791840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.796724 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.821642 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.827694 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.844542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860791 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.861781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.862012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.863157 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.863198 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24180df4643c6a16eed522d6b0ea5a8e9075be778452dd4ea758fdd573b59001/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.864389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.867596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.868785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.877147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.895656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962668 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064248 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064619 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.067879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.070057 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.070126 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.071251 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.071286 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6dd80fe1b9813ac525647e268fd40f85f3de84eda8cd138bd497820e0ff03be/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.072038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.080294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.085546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.100032 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.195060 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.291447 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"]
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.294293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" event={"ID":"9c48c070-b4b3-48af-b40a-d82788f764d9","Type":"ContainerStarted","Data":"b5dbc7d5851ce0132216b247584b19bd35c9b7580e440b6d7a66ef4521fe7b43"}
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.296956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" event={"ID":"837b4fe4-f827-4882-8af7-225b18bb3e22","Type":"ContainerStarted","Data":"234ac99a30a2b802e31a96f1f42cb9fed6dc6de9f0592ffe849d8767f95f062b"}
Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.302790 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28844dc_6974_446b_bd9a_b22586858387.slice/crio-907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b WatchSource:0}: Error finding container 907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b: Status 404 returned error can't find the container with id 907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.345583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.346995 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775ca902_fd03_4191_9440_ea598768d4e6.slice/crio-98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2 WatchSource:0}: Error finding container 98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2: Status 404 returned error can't find the container with id 98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.397226 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.403048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"]
Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.405709 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c1f86e8_fb7b_40a7_9cc7_07bc9aa74ce5.slice/crio-907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee WatchSource:0}: Error finding container 907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee: Status 404 returned error can't find the container with id 907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.659923 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.822879 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.831372 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3333e0_ec4e_41bf_8296_9469ad3ac9cd.slice/crio-f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997 WatchSource:0}: Error finding container f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997: Status 404 returned error can't find the container with id f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997
Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.304273 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"6975f95f-884b-4952-8bf8-0d18537e3403","Type":"ContainerStarted","Data":"ccb2aaf0f62e18390459fa7694a18b237133ff0a92fd3eb37c5f2dc22a0a5e3b"}
Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.305780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b"}
Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.306969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd","Type":"ContainerStarted","Data":"f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997"}
Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.308705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"775ca902-fd03-4191-9440-ea598768d4e6","Type":"ContainerStarted","Data":"98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2"}
Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.309868 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.345021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"775ca902-fd03-4191-9440-ea598768d4e6","Type":"ContainerStarted","Data":"169ea66b1988e22285b262d54bcbc4608cacdd0fb3c9b28f6847dfac5ebc59df"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.346669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.348843 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" event={"ID":"837b4fe4-f827-4882-8af7-225b18bb3e22","Type":"ContainerStarted","Data":"d70bb07fbdd4508db5891d729549052ca61be7cbad3897d210038ece393b3511"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.348964 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.351774 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"7ba2905855e993272c4b214c140509bda872171927a69642acd3d02ea21861bc"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.353900 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" event={"ID":"9c48c070-b4b3-48af-b40a-d82788f764d9","Type":"ContainerStarted","Data":"5abe207a494d942303c556158a2f9a268c5c53bfb9a2421323bba0befcf8d3ce"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.354785 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.356487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"6975f95f-884b-4952-8bf8-0d18537e3403","Type":"ContainerStarted","Data":"8b251bbe031bf6811830e0645ff930876c078e88e55861ac152f5fdd78eca244"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.356965 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.358484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" event={"ID":"c9201352-8585-47d4-9c13-b9e21ac4cd9f","Type":"ContainerStarted","Data":"895ac13ddc5c863fbcaa197af2ca920f7c947310cec88faba4298d72b0c48a52"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.358912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.360356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"bbc0c912e4d0d5cba98c93f5d6b482101035594d917d9228e2d79b3bbaaa5652"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.362027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd","Type":"ContainerStarted","Data":"5d686ab1322425ee880bef6b00c46db7139b5114d527e4b634095da9caacea96"}
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.362634 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.379455 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.914532866 podStartE2EDuration="7.379434284s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.349923721 +0000 UTC m=+767.430861035" lastFinishedPulling="2026-02-14 04:22:19.814825139 +0000 UTC m=+771.895762453" observedRunningTime="2026-02-14 04:22:20.370877782 +0000 UTC m=+772.451815106" watchObservedRunningTime="2026-02-14 04:22:20.379434284 +0000 UTC m=+772.460371608"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.389876 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podStartSLOduration=2.053994401 podStartE2EDuration="7.389856043s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.473409444 +0000 UTC m=+766.554346758" lastFinishedPulling="2026-02-14 04:22:19.809271066 +0000 UTC m=+771.890208400" observedRunningTime="2026-02-14 04:22:20.388761085 +0000 UTC m=+772.469698399" watchObservedRunningTime="2026-02-14 04:22:20.389856043 +0000 UTC m=+772.470793367"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.416555 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podStartSLOduration=1.778248984 podStartE2EDuration="7.416533503s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.102475796 +0000 UTC m=+766.183413110" lastFinishedPulling="2026-02-14 04:22:19.740760315 +0000 UTC m=+771.821697629" observedRunningTime="2026-02-14 04:22:20.412009726 +0000 UTC m=+772.492947070" watchObservedRunningTime="2026-02-14 04:22:20.416533503 +0000 UTC m=+772.497470817"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.452685 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podStartSLOduration=2.292465225 podStartE2EDuration="7.452663476s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.646983721 +0000 UTC m=+766.727921035" lastFinishedPulling="2026-02-14 04:22:19.807181972 +0000 UTC m=+771.888119286" observedRunningTime="2026-02-14 04:22:20.447221056 +0000 UTC m=+772.528158360" watchObservedRunningTime="2026-02-14 04:22:20.452663476 +0000 UTC m=+772.533600780"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.476193 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.344414766 podStartE2EDuration="7.476177064s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.684821347 +0000 UTC m=+767.765758661" lastFinishedPulling="2026-02-14 04:22:19.816583645 +0000 UTC m=+771.897520959" observedRunningTime="2026-02-14 04:22:20.474822899 +0000 UTC m=+772.555760213" watchObservedRunningTime="2026-02-14 04:22:20.476177064 +0000 UTC m=+772.557114368"
Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.508469 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.502843772 podStartE2EDuration="7.508446668s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.833675425 +0000 UTC m=+767.914612739" lastFinishedPulling="2026-02-14 04:22:19.839278321 +0000 UTC m=+771.920215635" observedRunningTime="2026-02-14 04:22:20.503846809 +0000 UTC m=+772.584784123" watchObservedRunningTime="2026-02-14 04:22:20.508446668 +0000 UTC m=+772.589383982"
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.378957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"6a006b19e56e3cf92b6649207f18201d86cdee688ceac33c20505054bb27deb4"}
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.380212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.380259 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.382873 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": dial tcp 10.217.0.54:8083: connect: connection refused" start-of-body=
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.382970 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": dial tcp 10.217.0.54:8083: connect: connection refused"
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.400311 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.406188 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podStartSLOduration=2.52474702 podStartE2EDuration="9.406167302s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.316484256 +0000 UTC m=+767.397421580" lastFinishedPulling="2026-02-14 04:22:22.197904548 +0000 UTC m=+774.278841862" observedRunningTime="2026-02-14 04:22:22.40185207 +0000 UTC m=+774.482789384" watchObservedRunningTime="2026-02-14 04:22:22.406167302 +0000 UTC m=+774.487104626"
Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.404871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"7408fe839eaef5d59649618802b010c8092e9a7dffe1ac25d667580b82d9b2e6"}
Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.411477 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"
Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.435207 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podStartSLOduration=3.658736251 podStartE2EDuration="10.435181019s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.414218763 +0000 UTC m=+767.495156077" lastFinishedPulling="2026-02-14 04:22:22.190663531 +0000 UTC m=+774.271600845" observedRunningTime="2026-02-14 04:22:23.423723403 +0000 UTC m=+775.504660767" watchObservedRunningTime="2026-02-14 04:22:23.435181019 +0000 UTC m=+775.516118333"
Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.415860 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.416321 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.426332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.430142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"
Feb 14 04:22:31 crc kubenswrapper[4867]: I0214 04:22:31.251501 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:22:31 crc kubenswrapper[4867]: I0214 04:22:31.252381 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:22:35 crc kubenswrapper[4867]: I0214 04:22:35.208224 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Feb 14 04:22:35 crc kubenswrapper[4867]: I0214 04:22:35.407137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 14 04:22:43 crc kubenswrapper[4867]: I0214 04:22:43.590189 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"
Feb 14 04:22:43 crc kubenswrapper[4867]: I0214 04:22:43.892042 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"
Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.077969 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"
Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.835347 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.835483 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 04:22:54 crc kubenswrapper[4867]: I0214 04:22:54.827987 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Feb 14 04:22:54 crc kubenswrapper[4867]: I0214 04:22:54.828883 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.251573 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.252654 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.252738 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.253918 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.254000 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
containerName="machine-config-daemon" containerID="cri-o://51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" gracePeriod=600 Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.747912 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" exitCode=0 Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.747987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.748320 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.748347 4867 scope.go:117] "RemoveContainer" containerID="2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" Feb 14 04:23:04 crc kubenswrapper[4867]: I0214 04:23:04.829387 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 04:23:04 crc kubenswrapper[4867]: I0214 04:23:04.830629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 04:23:14 crc kubenswrapper[4867]: I0214 04:23:14.830562 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.511442 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.513333 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.516848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517148 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517991 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zjsbd" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.518976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.526760 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.529123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.659770 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 
14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.659976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660092 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660145 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660243 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc 
kubenswrapper[4867]: I0214 04:23:32.660275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.672094 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.672805 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-kbnd4 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-9wcmp" podUID="a2144ced-e8cb-4b28-82f2-65e8dbd4688f" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762220 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.762333 4867 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762387 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.762415 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics podName:a2144ced-e8cb-4b28-82f2-65e8dbd4688f nodeName:}" failed. No retries permitted until 2026-02-14 04:23:33.262394946 +0000 UTC m=+845.343332270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics") pod "collector-9wcmp" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f") : secret "collector-metrics" not found Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763247 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: 
\"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763943 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.764164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.764311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.768208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.768433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.769698 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.783789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.784730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.026959 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.036754 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167833 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167905 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168050 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168075 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168151 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168181 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168211 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config" (OuterVolumeSpecName: "config") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir" (OuterVolumeSpecName: "datadir") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169397 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169439 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170061 4867 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170748 4867 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170786 4867 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170800 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.172881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.173078 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4" (OuterVolumeSpecName: "kube-api-access-kbnd4") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "kube-api-access-kbnd4". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.173342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token" (OuterVolumeSpecName: "sa-token") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.174377 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token" (OuterVolumeSpecName: "collector-token") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.178694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp" (OuterVolumeSpecName: "tmp") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp"
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272565 4867 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272584 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272600 4867 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272613 4867 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272624 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272636 4867 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.276365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp"
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.373649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") "
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.376302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics" (OuterVolumeSpecName: "metrics") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.475625 4867 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") on node \"crc\" DevicePath \"\""
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.036721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.118217 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-9wcmp"]
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.124272 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-9wcmp"]
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.129156 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4tm7t"]
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.130080 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136252 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136318 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136259 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zjsbd"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136789 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.144556 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.146903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4tm7t"]
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289034 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289087 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289181 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390952 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.391013 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.391583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.395443 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.398779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.398835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.403062 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.407645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.407734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.448573 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4tm7t"
Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.966531 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4tm7t"]
Feb 14 04:23:34 crc kubenswrapper[4867]: W0214 04:23:34.978658 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b309a8c_060a_4e8b_9731_3c4c3aab56f7.slice/crio-94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0 WatchSource:0}: Error finding container 94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0: Status 404 returned error can't find the container with id 94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0
Feb 14 04:23:35 crc kubenswrapper[4867]: I0214 04:23:35.014646 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2144ced-e8cb-4b28-82f2-65e8dbd4688f" path="/var/lib/kubelet/pods/a2144ced-e8cb-4b28-82f2-65e8dbd4688f/volumes"
Feb 14 04:23:35 crc kubenswrapper[4867]: I0214 04:23:35.045721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4tm7t" event={"ID":"0b309a8c-060a-4e8b-9731-3c4c3aab56f7","Type":"ContainerStarted","Data":"94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0"}
Feb 14 04:23:42 crc kubenswrapper[4867]: I0214 04:23:42.109441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4tm7t" event={"ID":"0b309a8c-060a-4e8b-9731-3c4c3aab56f7","Type":"ContainerStarted","Data":"cfc255139d34f5006f0cf92f0c59e4813687cfe1a16dab3d8448096c2259ec0c"}
Feb 14 04:23:42 crc kubenswrapper[4867]: I0214 04:23:42.129375 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-4tm7t" podStartSLOduration=1.531214951 podStartE2EDuration="8.129354063s" podCreationTimestamp="2026-02-14 04:23:34 +0000 UTC" firstStartedPulling="2026-02-14 04:23:34.982085641 +0000 UTC m=+847.063022995" lastFinishedPulling="2026-02-14 04:23:41.580224793 +0000 UTC m=+853.661162107" observedRunningTime="2026-02-14 04:23:42.128954203 +0000 UTC m=+854.209891517" watchObservedRunningTime="2026-02-14 04:23:42.129354063 +0000 UTC m=+854.210291377"
Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.965388 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"]
Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.967717 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.969862 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.990708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"]
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129954 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.232282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.232322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.249692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.282444 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.830318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"]
Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384109 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="4207d38a5fa1fca3e108eb003826a071655e3828ac35f556609321943c1c2c47" exitCode=0
Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384366 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"4207d38a5fa1fca3e108eb003826a071655e3828ac35f556609321943c1c2c47"}
Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerStarted","Data":"988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294"}
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.184602 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"]
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.187794 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.199625 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"]
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.366688 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.367280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.367475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.470006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.493157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.506987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj"
Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.992480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"]
Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400428 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" exitCode=0
Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1"}
Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"0d2d792529598a3e2dcb124798b2124c8ff8af3c6b135bf970fd763b2135679b"}
Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.404072 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="a64861927453dca5f17bbb20043fda3e88d8a529d848ca7e5278cd400e0c0eb0" exitCode=0
Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.404145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"a64861927453dca5f17bbb20043fda3e88d8a529d848ca7e5278cd400e0c0eb0"}
Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.428829 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="c5fa47781c87791c7e8f1959a10cc57347dbb2a20e8a17b099544e91349440e7" exitCode=0
Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.429095 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"c5fa47781c87791c7e8f1959a10cc57347dbb2a20e8a17b099544e91349440e7"}
Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.432649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"}
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.117246 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231436 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") "
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231536 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") "
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") "
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.232401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle" (OuterVolumeSpecName: "bundle") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.238329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7" (OuterVolumeSpecName: "kube-api-access-c2sb7") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "kube-api-access-c2sb7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.333377 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") on node \"crc\" DevicePath \"\""
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.334121 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.450291 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" exitCode=0
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.450364 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"}
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294"}
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452911 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294"
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452938 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.456418 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util" (OuterVolumeSpecName: "util") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.537381 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") on node \"crc\" DevicePath \"\""
Feb 14 04:24:18 crc kubenswrapper[4867]: I0214 04:24:18.462483 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"}
Feb 14 04:24:18 crc kubenswrapper[4867]: I0214 04:24:18.482301 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9pkj" podStartSLOduration=2.006370509 podStartE2EDuration="5.482283186s" podCreationTimestamp="2026-02-14 04:24:13 +0000 UTC" firstStartedPulling="2026-02-14 04:24:14.401999923 +0000 UTC m=+886.482937237" lastFinishedPulling="2026-02-14 04:24:17.8779126 +0000 UTC m=+889.958849914" observedRunningTime="2026-02-14 04:24:18.482116811 +0000 UTC m=+890.563054125" watchObservedRunningTime="2026-02-14 04:24:18.482283186 +0000 UTC m=+890.563220500"
Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172059 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"]
Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172622 4867 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="util" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172634 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="util" Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172664 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="pull" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172669 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="pull" Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172677 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172683 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172813 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.173347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.177953 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b457g" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.178005 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.177954 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.195913 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"] Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.303761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: \"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.405832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: \"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.427917 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: 
\"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.490759 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.830183 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"] Feb 14 04:24:22 crc kubenswrapper[4867]: I0214 04:24:22.490937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" event={"ID":"914b3f92-c030-4d1e-8454-96a7220f851e","Type":"ContainerStarted","Data":"6bdb56fc6f29899e41d5a95bb762934117046b0d52eec86a1351d05e29a285ae"} Feb 14 04:24:23 crc kubenswrapper[4867]: I0214 04:24:23.507158 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:23 crc kubenswrapper[4867]: I0214 04:24:23.507597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:24 crc kubenswrapper[4867]: I0214 04:24:24.551155 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9pkj" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" probeResult="failure" output=< Feb 14 04:24:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:24:24 crc kubenswrapper[4867]: > Feb 14 04:24:25 crc kubenswrapper[4867]: I0214 04:24:25.516204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" event={"ID":"914b3f92-c030-4d1e-8454-96a7220f851e","Type":"ContainerStarted","Data":"799d565c553e09c7f8e1cee56462c881ef66c749993e6574e9634e536fa08fc5"} Feb 14 04:24:25 crc kubenswrapper[4867]: I0214 04:24:25.546419 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" podStartSLOduration=1.668434808 podStartE2EDuration="4.546403079s" podCreationTimestamp="2026-02-14 04:24:21 +0000 UTC" firstStartedPulling="2026-02-14 04:24:21.835117248 +0000 UTC m=+893.916054562" lastFinishedPulling="2026-02-14 04:24:24.713085519 +0000 UTC m=+896.794022833" observedRunningTime="2026-02-14 04:24:25.545128306 +0000 UTC m=+897.626065630" watchObservedRunningTime="2026-02-14 04:24:25.546403079 +0000 UTC m=+897.627340393" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.397205 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.398935 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.405296 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-w9w2j" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.407700 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.408445 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.410053 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417067 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417186 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.473005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.502239 4867 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k6p82"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.504156 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.518584 4867 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.518630 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair podName:fdb6e297-9da3-41ff-a6f3-de81833178c8 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:24:33.018613355 +0000 UTC m=+905.099550669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair") pod "nmstate-webhook-866bcb46dc-khbvf" (UID: "fdb6e297-9da3-41ff-a6f3-de81833178c8") : secret "openshift-nmstate-webhook" not found Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.543538 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.544938 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.613215 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.614387 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.616027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h5hx7" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.617911 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.620618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.620885 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.621031 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.621080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " 
pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.625765 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.626480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.717149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723070 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723331 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.775978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.824804 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825583 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.825626 4867 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.825692 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert podName:bd1547ee-0518-45af-bb63-9001da6fa7de nodeName:}" failed. No retries permitted until 2026-02-14 04:24:33.32567724 +0000 UTC m=+905.406614554 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-xwq77" (UID: "bd1547ee-0518-45af-bb63-9001da6fa7de") : secret "plugin-serving-cert" not found Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.826302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.830484 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.831412 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.853862 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.863327 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " 
pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927189 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927376 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod 
\"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod 
\"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.037062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.037098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.043437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod 
\"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.044400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.044991 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.045157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.045685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.048568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.065319 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.068374 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.072275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.158369 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.353874 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.358859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.434473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:33 crc kubenswrapper[4867]: W0214 04:24:33.448379 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9fcfe59_df8c_4433_a47f_8b07f90d98bc.slice/crio-a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029 WatchSource:0}: Error finding container a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029: Status 404 returned error can't find the container with id a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029 Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.535892 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.564558 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.613386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029"} Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.615923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6p82" event={"ID":"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa","Type":"ContainerStarted","Data":"b91c206570513148d6d5eff0600ac77cf7f699da03c866663af40025b8c9f3b6"} Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.636831 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.642431 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:33 crc kubenswrapper[4867]: W0214 04:24:33.647004 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb6e297_9da3_41ff_a6f3_de81833178c8.slice/crio-689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a WatchSource:0}: Error finding container 689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a: Status 404 returned error can't find the container with id 689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.774678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 
04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.805496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.084635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:34 crc kubenswrapper[4867]: W0214 04:24:34.095002 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1547ee_0518_45af_bb63_9001da6fa7de.slice/crio-fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780 WatchSource:0}: Error finding container fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780: Status 404 returned error can't find the container with id fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780 Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.625136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" event={"ID":"fdb6e297-9da3-41ff-a6f3-de81833178c8","Type":"ContainerStarted","Data":"689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.626927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" event={"ID":"bd1547ee-0518-45af-bb63-9001da6fa7de","Type":"ContainerStarted","Data":"fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629092 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9pkj" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" containerID="cri-o://49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" gracePeriod=2 Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629656 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerStarted","Data":"c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerStarted","Data":"999f569ca24af828fccac613f37abfd55e6b13b288390e3bcddcc9896a94a3f7"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.670349 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c8864b6b5-mwdd6" podStartSLOduration=2.670321785 podStartE2EDuration="2.670321785s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:24:34.658472437 +0000 UTC m=+906.739409791" watchObservedRunningTime="2026-02-14 04:24:34.670321785 +0000 UTC m=+906.751259099" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.182793 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297140 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.298477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities" (OuterVolumeSpecName: "utilities") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.322293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g" (OuterVolumeSpecName: "kube-api-access-p2z7g") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "kube-api-access-p2z7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.399855 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.400108 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.432704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.501705 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641166 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" exitCode=0 Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641269 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"} Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"0d2d792529598a3e2dcb124798b2124c8ff8af3c6b135bf970fd763b2135679b"} Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641437 4867 scope.go:117] "RemoveContainer" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.673004 4867 scope.go:117] "RemoveContainer" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.674128 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.680078 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.729209 4867 scope.go:117] "RemoveContainer" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.781733 4867 scope.go:117] "RemoveContainer" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.782287 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": container with ID starting with 49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347 not found: ID does not exist" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.782339 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"} err="failed to get container status \"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": rpc error: code = NotFound desc = could not find container \"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": container with ID starting with 49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347 not found: ID does not exist" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.782371 4867 scope.go:117] "RemoveContainer" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.783954 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": container with ID starting with d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf not found: ID does not exist" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.783974 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"} err="failed to get container status \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": rpc error: code = NotFound desc = could not find container \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": container with ID 
starting with d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf not found: ID does not exist" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.783989 4867 scope.go:117] "RemoveContainer" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.784517 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": container with ID starting with 59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1 not found: ID does not exist" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.784550 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1"} err="failed to get container status \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": rpc error: code = NotFound desc = could not find container \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": container with ID starting with 59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1 not found: ID does not exist" Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.007628 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" path="/var/lib/kubelet/pods/5ad1164b-e852-484b-b290-6d32e24d3d8e/volumes" Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.662684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"4e20e0b744f80694499d413e780a6fe175467627a13bcf53143fa0e3950eb199"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.664866 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" event={"ID":"bd1547ee-0518-45af-bb63-9001da6fa7de","Type":"ContainerStarted","Data":"d53e858246870ebc705630eeddac5777f35bf1ff9c3e7e2104365186b6739e00"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.667339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" event={"ID":"fdb6e297-9da3-41ff-a6f3-de81833178c8","Type":"ContainerStarted","Data":"5e85c689adbcce35a22683e54c5bb7f86cec1fbb103cf18d989ea1230fc5d615"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.682140 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" podStartSLOduration=2.463098864 podStartE2EDuration="5.68211797s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:34.097564671 +0000 UTC m=+906.178501975" lastFinishedPulling="2026-02-14 04:24:37.316583767 +0000 UTC m=+909.397521081" observedRunningTime="2026-02-14 04:24:37.677907341 +0000 UTC m=+909.758844655" watchObservedRunningTime="2026-02-14 04:24:37.68211797 +0000 UTC m=+909.763055284" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.675235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6p82" event={"ID":"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa","Type":"ContainerStarted","Data":"b008e3ff644420661244317668e9c1ae0286046bea6a7ee3f1a5406bea640614"} Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.675668 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.696917 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k6p82" podStartSLOduration=2.292169434 podStartE2EDuration="6.696890074s" 
podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:32.941673212 +0000 UTC m=+905.022610526" lastFinishedPulling="2026-02-14 04:24:37.346393852 +0000 UTC m=+909.427331166" observedRunningTime="2026-02-14 04:24:38.687208683 +0000 UTC m=+910.768145997" watchObservedRunningTime="2026-02-14 04:24:38.696890074 +0000 UTC m=+910.777827418" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.714007 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podStartSLOduration=3.040199081 podStartE2EDuration="6.713983138s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:33.649828853 +0000 UTC m=+905.730766167" lastFinishedPulling="2026-02-14 04:24:37.32361291 +0000 UTC m=+909.404550224" observedRunningTime="2026-02-14 04:24:38.701658838 +0000 UTC m=+910.782596162" watchObservedRunningTime="2026-02-14 04:24:38.713983138 +0000 UTC m=+910.794920492" Feb 14 04:24:39 crc kubenswrapper[4867]: I0214 04:24:39.682547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:40 crc kubenswrapper[4867]: I0214 04:24:40.690536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"94f36c17e98ae4ab51239fa7c7e1510c22698ff66613e759b06ddc01e8aca414"} Feb 14 04:24:40 crc kubenswrapper[4867]: I0214 04:24:40.712298 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" podStartSLOduration=1.8842576009999998 podStartE2EDuration="8.712273123s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:33.45257067 +0000 UTC m=+905.533507984" lastFinishedPulling="2026-02-14 04:24:40.280586192 +0000 UTC m=+912.361523506" 
observedRunningTime="2026-02-14 04:24:40.705813135 +0000 UTC m=+912.786750449" watchObservedRunningTime="2026-02-14 04:24:40.712273123 +0000 UTC m=+912.793210437" Feb 14 04:24:42 crc kubenswrapper[4867]: I0214 04:24:42.850650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.159954 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.159991 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.165624 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.721266 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.794284 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:24:53 crc kubenswrapper[4867]: I0214 04:24:53.078197 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.250972 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.251737 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.734633 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735026 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-utilities" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735048 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-utilities" Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735078 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735088 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735104 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-content" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735112 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-content" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735284 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.736678 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.756574 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806962 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.907895 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.907991 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.908032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.909019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.909043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.931421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.054763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.621943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.892119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.892722 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"0f8af18980c35ed58409e3eef5c5ce346989fd381ffc5f93082d6eedce320de7"} Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.908589 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" exitCode=0 Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.909086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.912402 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:25:05 crc kubenswrapper[4867]: I0214 04:25:05.934051 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" exitCode=0 Feb 14 04:25:05 crc kubenswrapper[4867]: I0214 04:25:05.934177 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c"} Feb 14 04:25:06 crc kubenswrapper[4867]: I0214 04:25:06.959093 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} Feb 14 04:25:06 crc kubenswrapper[4867]: I0214 04:25:06.988111 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m8sv" podStartSLOduration=3.484357437 podStartE2EDuration="5.988093448s" podCreationTimestamp="2026-02-14 04:25:01 +0000 UTC" firstStartedPulling="2026-02-14 04:25:03.912166117 +0000 UTC m=+935.993103431" lastFinishedPulling="2026-02-14 04:25:06.415902128 +0000 UTC m=+938.496839442" observedRunningTime="2026-02-14 04:25:06.984134776 +0000 UTC m=+939.065072090" watchObservedRunningTime="2026-02-14 04:25:06.988093448 +0000 UTC m=+939.069030762" Feb 14 04:25:08 crc kubenswrapper[4867]: I0214 04:25:08.867427 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6687988ff8-hggh9" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" containerID="cri-o://3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" gracePeriod=15 Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.388331 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6687988ff8-hggh9_2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/console/0.log" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.389804 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463820 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463995 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464150 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464180 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca" (OuterVolumeSpecName: "service-ca") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465757 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.467671 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config" (OuterVolumeSpecName: "console-config") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.475674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.476875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.477149 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd" (OuterVolumeSpecName: "kube-api-access-pm2vd") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "kube-api-access-pm2vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567623 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567673 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567687 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567701 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567717 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567736 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567780 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc 
kubenswrapper[4867]: I0214 04:25:09.991829 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6687988ff8-hggh9_2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/console/0.log" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992177 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" exitCode=2 Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerDied","Data":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerDied","Data":"129cdcd69132d20dcbb1f824da4d34637e927a59f414ddd5999cdc93d09a0538"} Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992257 4867 scope.go:117] "RemoveContainer" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992291 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.032947 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.036787 4867 scope.go:117] "RemoveContainer" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:10 crc kubenswrapper[4867]: E0214 04:25:10.037690 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": container with ID starting with 3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063 not found: ID does not exist" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.037740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} err="failed to get container status \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": rpc error: code = NotFound desc = could not find container \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": container with ID starting with 3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063 not found: ID does not exist" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.039775 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:25:11 crc kubenswrapper[4867]: I0214 04:25:11.008376 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" path="/var/lib/kubelet/pods/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/volumes" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.055017 4867 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.055110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.108415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:13 crc kubenswrapper[4867]: I0214 04:25:13.071973 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.444710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703071 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:14 crc kubenswrapper[4867]: E0214 04:25:14.703459 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703477 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703689 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.705032 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.715653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.720207 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: 
I0214 04:25:14.972101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972983 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.973612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.003669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.029568 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m8sv" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" containerID="cri-o://800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" gracePeriod=2 Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.040492 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.524129 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.596618 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687435 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687521 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687701 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.688826 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities" (OuterVolumeSpecName: "utilities") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.696710 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4" (OuterVolumeSpecName: "kube-api-access-fgcq4") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "kube-api-access-fgcq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.718495 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790172 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790217 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790233 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040012 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" 
containerID="025cfdbf2a758606cb832c39de19cbd957cd6a91a34d8ad3c65d524e3f69a579" exitCode=0 Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"025cfdbf2a758606cb832c39de19cbd957cd6a91a34d8ad3c65d524e3f69a579"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerStarted","Data":"2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043329 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" exitCode=0 Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"0f8af18980c35ed58409e3eef5c5ce346989fd381ffc5f93082d6eedce320de7"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043430 4867 scope.go:117] "RemoveContainer" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043605 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.063536 4867 scope.go:117] "RemoveContainer" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.082827 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.090798 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.099732 4867 scope.go:117] "RemoveContainer" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135073 4867 scope.go:117] "RemoveContainer" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.135571 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": container with ID starting with 800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35 not found: ID does not exist" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135621 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} err="failed to get container status \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": rpc error: code = NotFound desc = could not find container \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": container with ID starting with 800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35 not 
found: ID does not exist" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135650 4867 scope.go:117] "RemoveContainer" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.135987 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": container with ID starting with 781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c not found: ID does not exist" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136020 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c"} err="failed to get container status \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": rpc error: code = NotFound desc = could not find container \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": container with ID starting with 781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c not found: ID does not exist" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136047 4867 scope.go:117] "RemoveContainer" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.136375 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": container with ID starting with be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01 not found: ID does not exist" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136404 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} err="failed to get container status \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": rpc error: code = NotFound desc = could not find container \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": container with ID starting with be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01 not found: ID does not exist" Feb 14 04:25:17 crc kubenswrapper[4867]: I0214 04:25:17.005915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" path="/var/lib/kubelet/pods/d98e15fa-a08a-4710-a903-60a1af5ff85c/volumes" Feb 14 04:25:18 crc kubenswrapper[4867]: I0214 04:25:18.061952 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerID="bd4ca1932fd255aa202749888d70a75889f2b31069893060643a2caae1e51f9a" exitCode=0 Feb 14 04:25:18 crc kubenswrapper[4867]: I0214 04:25:18.062089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"bd4ca1932fd255aa202749888d70a75889f2b31069893060643a2caae1e51f9a"} Feb 14 04:25:19 crc kubenswrapper[4867]: I0214 04:25:19.072569 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerID="4256fd9fddc4b76fc03a089854dcfa3f61c0df98de19f12dfa8e554deb082fdc" exitCode=0 Feb 14 04:25:19 crc kubenswrapper[4867]: I0214 04:25:19.072625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" 
event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"4256fd9fddc4b76fc03a089854dcfa3f61c0df98de19f12dfa8e554deb082fdc"} Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.382843 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.484393 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle" (OuterVolumeSpecName: "bundle") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.489148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h" (OuterVolumeSpecName: "kube-api-access-47s2h") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "kube-api-access-47s2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.511097 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util" (OuterVolumeSpecName: "util") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583635 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583689 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583702 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" 
event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf"} Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100859 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf" Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100882 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053221 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053823 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053839 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053845 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053865 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="util" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053870 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="util" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053881 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-utilities" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-utilities" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053896 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="pull" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053902 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="pull" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053914 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-content" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053919 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-content" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.054044 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.054054 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.055214 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.069244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.225643 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.225959 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.226074 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327915 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.328849 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.328882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.361665 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.377356 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.907184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:24 crc kubenswrapper[4867]: I0214 04:25:24.120659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6"} Feb 14 04:25:24 crc kubenswrapper[4867]: I0214 04:25:24.120949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799"} Feb 14 04:25:25 crc kubenswrapper[4867]: I0214 04:25:25.131148 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerID="5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6" exitCode=0 Feb 14 04:25:25 crc kubenswrapper[4867]: I0214 04:25:25.131212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6"} Feb 14 04:25:26 crc kubenswrapper[4867]: I0214 04:25:26.140100 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7"} Feb 14 04:25:26 crc kubenswrapper[4867]: E0214 04:25:26.649638 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-conmon-f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:25:27 crc kubenswrapper[4867]: I0214 04:25:27.148660 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerID="f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7" exitCode=0 Feb 14 04:25:27 crc kubenswrapper[4867]: I0214 04:25:27.148760 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7"} Feb 14 04:25:28 crc kubenswrapper[4867]: I0214 04:25:28.160030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af"} Feb 14 04:25:28 crc kubenswrapper[4867]: I0214 04:25:28.183983 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89zzb" podStartSLOduration=2.787306109 podStartE2EDuration="5.1839624s" podCreationTimestamp="2026-02-14 04:25:23 +0000 UTC" firstStartedPulling="2026-02-14 04:25:25.133703945 +0000 UTC m=+957.214641259" lastFinishedPulling="2026-02-14 04:25:27.530360236 +0000 UTC m=+959.611297550" observedRunningTime="2026-02-14 04:25:28.18049824 +0000 UTC m=+960.261435584" watchObservedRunningTime="2026-02-14 04:25:28.1839624 +0000 UTC 
m=+960.264899714" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.413148 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.415440 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.427239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.427800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428135 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ssl6n" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.448308 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" 
Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.671620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.673171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.690535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.749837 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.970153 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.971391 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977367 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977699 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bk6fr" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977956 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.035256 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod 
\"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173933 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.192045 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.192315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.197058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.254700 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.254787 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:25:31 crc 
kubenswrapper[4867]: I0214 04:25:31.335587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.519633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.869250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:32 crc kubenswrapper[4867]: I0214 04:25:32.203479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"7718f8a85877233a199a0d78e4a43cd0f8c75fac444005e1e147a286cedb7377"} Feb 14 04:25:32 crc kubenswrapper[4867]: I0214 04:25:32.205368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"5be51d0e0c6b771905fdca56951d824129bce28d1ecefdf5c2b307a204fea993"} Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.378263 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.384671 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.457212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:34 crc kubenswrapper[4867]: I0214 04:25:34.313847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.284870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027"} Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.307345 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podStartSLOduration=2.279931092 podStartE2EDuration="6.307328192s" podCreationTimestamp="2026-02-14 04:25:30 +0000 UTC" firstStartedPulling="2026-02-14 04:25:31.534052791 +0000 UTC m=+963.614990105" lastFinishedPulling="2026-02-14 04:25:35.561449891 +0000 UTC m=+967.642387205" observedRunningTime="2026-02-14 04:25:36.303942434 +0000 UTC m=+968.384879748" watchObservedRunningTime="2026-02-14 04:25:36.307328192 +0000 UTC m=+968.388265506" Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.648941 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:37 crc kubenswrapper[4867]: I0214 04:25:37.291665 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:37 crc kubenswrapper[4867]: I0214 04:25:37.291849 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-89zzb" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" containerID="cri-o://84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" gracePeriod=2 Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.312164 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" 
containerID="84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" exitCode=0 Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.312246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af"} Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.580870 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.730808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.730944 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.731030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.732056 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities" (OuterVolumeSpecName: "utilities") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: 
"41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.740438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd" (OuterVolumeSpecName: "kube-api-access-4fhtd") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: "41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "kube-api-access-4fhtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.783038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: "41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.832973 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.833019 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.833031 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:39 crc kubenswrapper[4867]: E0214 04:25:39.104617 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799\": RecentStats: unable to find data in memory cache]" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799"} Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321336 4867 scope.go:117] "RemoveContainer" containerID="84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321331 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.323219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"} Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.323453 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.340049 4867 scope.go:117] "RemoveContainer" containerID="f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.341732 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.359682 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.361075 4867 scope.go:117] "RemoveContainer" containerID="5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.375712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podStartSLOduration=2.983757409 podStartE2EDuration="9.375689846s" podCreationTimestamp="2026-02-14 04:25:30 +0000 UTC" firstStartedPulling="2026-02-14 04:25:31.880786485 +0000 UTC m=+963.961723799" lastFinishedPulling="2026-02-14 04:25:38.272718922 +0000 UTC m=+970.353656236" observedRunningTime="2026-02-14 04:25:39.369718751 +0000 UTC m=+971.450656065" watchObservedRunningTime="2026-02-14 04:25:39.375689846 +0000 UTC m=+971.456627160" Feb 14 04:25:41 crc 
kubenswrapper[4867]: I0214 04:25:41.006805 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" path="/var/lib/kubelet/pods/41593fcf-d77d-43cb-897b-bf50bbc07d31/volumes" Feb 14 04:25:51 crc kubenswrapper[4867]: I0214 04:25:51.343881 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.250427 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251019 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251062 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251749 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251798 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" gracePeriod=600 Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.485779 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" exitCode=0 Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.485826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.486245 4867 scope.go:117] "RemoveContainer" containerID="51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" Feb 14 04:26:02 crc kubenswrapper[4867]: I0214 04:26:02.533678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} Feb 14 04:26:10 crc kubenswrapper[4867]: I0214 04:26:10.753491 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481349 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nzdwg"] Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481737 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481758 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481793 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-utilities" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481802 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-utilities" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-content" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481826 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-content" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.482004 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.485445 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.489906 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gpnt5" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.490219 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.490401 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.499089 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.500345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.501836 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.541561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" (UID: 
\"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585813 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fmk4\" (UniqueName: \"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " 
pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.605716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4hvw7"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.619987 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.623996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tv9sc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624319 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624450 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.626432 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.630646 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.645024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.686976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687095 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687147 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687252 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2fmk4\" (UniqueName: \"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687343 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.688016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.688186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.691871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.693238 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.693417 4867 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.693499 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs podName:cfde5532-97c7-47b8-8b63-0159fc9e82b9 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.19347867 +0000 UTC m=+1004.274416054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs") pod "frr-k8s-nzdwg" (UID: "cfde5532-97c7-47b8-8b63-0159fc9e82b9") : secret "frr-k8s-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.693970 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.714722 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fmk4\" (UniqueName: \"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.717805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.788946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789330 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789143 4867 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789469 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.289445951 +0000 UTC m=+1004.370383265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "speaker-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789796 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789905 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.289881553 +0000 UTC m=+1004.370818967 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "metallb-memberlist" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.790052 4867 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.790101 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs podName:516cf204-1263-431e-a450-039739b0d925 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.290090288 +0000 UTC m=+1004.371027612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs") pod "controller-69bbfbf88f-zhmxc" (UID: "516cf204-1263-431e-a450-039739b0d925") : secret "controller-certs-secret" not found
Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.790785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.808317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.820589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.836662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.844345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.196307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.201710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.273275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"]
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297763 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:12 crc kubenswrapper[4867]: E0214 04:26:12.302722 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 14 04:26:12 crc kubenswrapper[4867]: E0214 04:26:12.302832 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:13.302806726 +0000 UTC m=+1005.383744080 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "metallb-memberlist" not found
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.310216 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.310263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.414085 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.584076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.601703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"f0f07f92e5b1e4236153a02b0c2fb464b5e43abca36d508342ba96642bd11950"}
Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.602689 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"ce77dc003a1565cbaecc3f50e4f0d210e45e322dbcb4cc8aa0c95512aa6a94b8"}
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.009130 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"]
Feb 14 04:26:13 crc kubenswrapper[4867]: W0214 04:26:13.016687 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516cf204_1263_431e_a450_039739b0d925.slice/crio-5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39 WatchSource:0}: Error finding container 5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39: Status 404 returned error can't find the container with id 5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.316540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.322903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.473424 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:13 crc kubenswrapper[4867]: W0214 04:26:13.498410 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e0a7a97_9ea6_4dcf_85a4_995d891fa5f8.slice/crio-0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50 WatchSource:0}: Error finding container 0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50: Status 404 returned error can't find the container with id 0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"e7c8076069c83a4d5e444b60b9e3f64f117dacf01a093cfeed7b95ebb0df2e1d"}
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"4bbf9b9014a8149d15e6f79b0dcdd17d692b22b97863b761a75ad9d86bb21987"}
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628724 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39"}
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.629844 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.632234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50"}
Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.652094 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-zhmxc" podStartSLOduration=2.652047465 podStartE2EDuration="2.652047465s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:26:13.649810387 +0000 UTC m=+1005.730747701" watchObservedRunningTime="2026-02-14 04:26:13.652047465 +0000 UTC m=+1005.732984779"
Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.659375 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"a6e42e3026e062a43a4b38b44ad77704843728f5218e54cbfd71ef805c27bacb"}
Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.659740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"}
Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.712786 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4hvw7" podStartSLOduration=3.712761984 podStartE2EDuration="3.712761984s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:26:14.702964831 +0000 UTC m=+1006.783902145" watchObservedRunningTime="2026-02-14 04:26:14.712761984 +0000 UTC m=+1006.793699318"
Feb 14 04:26:15 crc kubenswrapper[4867]: I0214 04:26:15.667650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.754906 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"}
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.756765 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.758436 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="0aea0bb3b1a2276d3b97fea97e62516551b7b690f473022c0b6928d6ab7538ff" exitCode=0
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.758470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"0aea0bb3b1a2276d3b97fea97e62516551b7b690f473022c0b6928d6ab7538ff"}
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.793098 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podStartSLOduration=2.38072621 podStartE2EDuration="11.79307356s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="2026-02-14 04:26:12.276418414 +0000 UTC m=+1004.357355728" lastFinishedPulling="2026-02-14 04:26:21.688765764 +0000 UTC m=+1013.769703078" observedRunningTime="2026-02-14 04:26:22.788020329 +0000 UTC m=+1014.868957643" watchObservedRunningTime="2026-02-14 04:26:22.79307356 +0000 UTC m=+1014.874010874"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.932583 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.935708 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.978248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.059788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.059890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060696 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.080782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.286567 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.484098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4hvw7"
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.767200 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="cd9b88599d43ea9f82cd648a739f3263ee4fed536da4d246c5ad2c6864aad0a0" exitCode=0
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.767248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"cd9b88599d43ea9f82cd648a739f3263ee4fed536da4d246c5ad2c6864aad0a0"}
Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.825554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:23 crc kubenswrapper[4867]: W0214 04:26:23.830750 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87fbab35_1a29_4dcd_94fd_b8d663b73622.slice/crio-de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b WatchSource:0}: Error finding container de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b: Status 404 returned error can't find the container with id de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b
Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.780404 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="0e79cead43e145ebc00cae1a79b5c2bfc4c85f66748229277e2c4e4c8ef7f651" exitCode=0
Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.780446 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"0e79cead43e145ebc00cae1a79b5c2bfc4c85f66748229277e2c4e4c8ef7f651"}
Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783027 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e" exitCode=0
Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"}
Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b"}
Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.794341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"}
Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.794859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"7dc066e000b0f0659e3da8817568bd5537335c5736f2f7be29d33d5f49e508de"}
Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.796016 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.816267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"2b641826e1bdc0c9338a084886d7dddd2dae8caa45adbe0d79e15726e335705c"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"5b950ed8d59a06a71544ad0e918e0512757c07d75b22164cb8ef06d82b857118"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817426 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"8a31d984db28b5601904993e4b679f38e218cc59f491162ded1096bde8c0e281"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"8426c07a22007ae8cd6cc9210f95af45a35f7e53edfe6d5be65ad75c86067d42"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817690 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.819483 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb" exitCode=0
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.819555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"}
Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.842263 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nzdwg" podStartSLOduration=6.7129966549999995 podStartE2EDuration="15.842240055s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="2026-02-14 04:26:12.54235287 +0000 UTC m=+1004.623290184" lastFinishedPulling="2026-02-14 04:26:21.67159626 +0000 UTC m=+1013.752533584" observedRunningTime="2026-02-14 04:26:26.841217988 +0000 UTC m=+1018.922155322" watchObservedRunningTime="2026-02-14 04:26:26.842240055 +0000 UTC m=+1018.923177369"
Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.414933 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.488840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.831282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"}
Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.875870 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5crd9" podStartSLOduration=3.445740594 podStartE2EDuration="5.875846263s" podCreationTimestamp="2026-02-14 04:26:22 +0000 UTC" firstStartedPulling="2026-02-14 04:26:24.784381932 +0000 UTC m=+1016.865319246" lastFinishedPulling="2026-02-14 04:26:27.214487601 +0000 UTC m=+1019.295424915" observedRunningTime="2026-02-14 04:26:27.87111767 +0000 UTC m=+1019.952054994" watchObservedRunningTime="2026-02-14 04:26:27.875846263 +0000 UTC m=+1019.956783587"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.657251 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"]
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.659529 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.662200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.663009 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.663200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rmhl7"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.667554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"]
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.805090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.907411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.928360 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.988218 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:31 crc kubenswrapper[4867]: I0214 04:26:31.522379 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"]
Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:31.870784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"28a0d17bc4f973949e58e3192827620aa395a4178d30694415f9c18c7463ced4"}
Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:32.227326 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"
Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:32.588683 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-zhmxc"
Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.286746 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.287066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.353955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.960428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:35 crc kubenswrapper[4867]: I0214 04:26:35.911573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"}
Feb 14 04:26:35 crc kubenswrapper[4867]: I0214 04:26:35.934628 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-29mb7" podStartSLOduration=2.36591447 podStartE2EDuration="5.934602721s" podCreationTimestamp="2026-02-14 04:26:30 +0000 UTC" firstStartedPulling="2026-02-14 04:26:31.530867316 +0000 UTC m=+1023.611804630" lastFinishedPulling="2026-02-14 04:26:35.099555547 +0000 UTC m=+1027.180492881" observedRunningTime="2026-02-14 04:26:35.932220049 +0000 UTC m=+1028.013157403" watchObservedRunningTime="2026-02-14 04:26:35.934602721 +0000 UTC m=+1028.015540035"
Feb 14 04:26:37 crc kubenswrapper[4867]: I0214 04:26:37.845453 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:37 crc kubenswrapper[4867]: I0214 04:26:37.846451 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5crd9" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" containerID="cri-o://95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" gracePeriod=2
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.791742 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934679 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" exitCode=0
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934741 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9"
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"}
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b"}
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934833 4867 scope.go:117] "RemoveContainer" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") "
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") "
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950393 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") "
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.951247 4867 scope.go:117] "RemoveContainer" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.951453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities" (OuterVolumeSpecName: "utilities") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.959404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl" (OuterVolumeSpecName: "kube-api-access-67bfl") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "kube-api-access-67bfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.966485 4867 scope.go:117] "RemoveContainer" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.004785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.025425 4867 scope.go:117] "RemoveContainer" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"
Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.026064 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": container with ID starting with 95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87 not found: ID does not exist" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026106 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"} err="failed to get container status \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": rpc error: code = NotFound desc = could not find container \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": container with ID starting with 95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87 not found: ID does not exist"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026136 4867 scope.go:117] "RemoveContainer" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"
Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.026547 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": container with ID starting with a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb not found: ID does not exist" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026575 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"} err="failed to get container status \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": rpc error: code = NotFound desc = could not find container \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": container with ID starting with a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb not found: ID does not exist"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026589 4867 scope.go:117] "RemoveContainer" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"
Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.027228 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": container with ID starting with 8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e not found: ID does not exist" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.027256 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"} err="failed to get container status \"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": rpc error: code = NotFound desc = could not find container \"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": container with ID starting with 8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e not found: ID does not exist"
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052261 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") on node \"crc\" DevicePath \"\""
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052293 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052303 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.253889 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.259787 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5crd9"]
Feb 14 04:26:40 crc kubenswrapper[4867]: I0214 04:26:40.988431 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7"
Feb 14 04:26:40 crc kubenswrapper[4867]: I0214 04:26:40.988814 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.015786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" path="/var/lib/kubelet/pods/87fbab35-1a29-4dcd-94fd-b8d663b73622/volumes" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.026499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.986715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:42 crc kubenswrapper[4867]: I0214 04:26:42.420008 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.887415 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888057 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-utilities" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888070 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-utilities" Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888103 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-content" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888112 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-content" Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888120 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888125 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888273 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.889439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.891532 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-n8htd" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.901564 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.036878 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.036983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " 
pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.037028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138912 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 
04:26:44.139659 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.139698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.157403 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.215833 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.683619 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.979729 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerStarted","Data":"26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7"} Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.980073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerStarted","Data":"c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a"} Feb 14 04:26:45 crc kubenswrapper[4867]: I0214 04:26:45.991467 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7" exitCode=0 Feb 14 04:26:45 crc kubenswrapper[4867]: I0214 04:26:45.991531 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7"} Feb 14 04:26:47 crc kubenswrapper[4867]: I0214 04:26:47.003085 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="f28e180ce4df271ab60e8101d1a4a5a090a6e9f14af22216d08fa32a9c9cfce1" exitCode=0 Feb 14 04:26:47 crc kubenswrapper[4867]: I0214 04:26:47.014293 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"f28e180ce4df271ab60e8101d1a4a5a090a6e9f14af22216d08fa32a9c9cfce1"} Feb 14 04:26:48 crc kubenswrapper[4867]: I0214 04:26:48.021992 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="159bc62f52e94cba661c0e5bd47942bf912ce2a37d3c6a9764d0abd2e62d919d" exitCode=0 Feb 14 04:26:48 crc kubenswrapper[4867]: I0214 04:26:48.022306 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"159bc62f52e94cba661c0e5bd47942bf912ce2a37d3c6a9764d0abd2e62d919d"} Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.383536 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539810 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539998 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.540776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle" (OuterVolumeSpecName: "bundle") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.548827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh" (OuterVolumeSpecName: "kube-api-access-86bzh") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "kube-api-access-86bzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.555357 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util" (OuterVolumeSpecName: "util") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643009 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643051 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643060 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.039924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a"} Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.040266 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a" Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.039974 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.042406 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044211 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044294 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044366 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="util" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044425 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="util" Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044485 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="pull" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044566 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="pull" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044765 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.045369 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.047341 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-tvplp" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.073371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.206786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.307965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.326292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.365364 4867 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.834177 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:54 crc kubenswrapper[4867]: I0214 04:26:54.080383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" event={"ID":"10461723-ecff-48fe-a034-9a07bf3bf8f7","Type":"ContainerStarted","Data":"da48c96176f86ed873e7ef026b1e135894bc0628dcc59baf8b819923a1ba2408"} Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.130157 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" event={"ID":"10461723-ecff-48fe-a034-9a07bf3bf8f7","Type":"ContainerStarted","Data":"b501166086dcf813d43fbe01f66927fcbef4f7716cc8b6badc80e7113b808be2"} Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.130807 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.162228 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podStartSLOduration=1.117456254 podStartE2EDuration="6.162207793s" podCreationTimestamp="2026-02-14 04:26:53 +0000 UTC" firstStartedPulling="2026-02-14 04:26:53.848585481 +0000 UTC m=+1045.929522795" lastFinishedPulling="2026-02-14 04:26:58.89333702 +0000 UTC m=+1050.974274334" observedRunningTime="2026-02-14 04:26:59.154224617 +0000 UTC m=+1051.235161941" watchObservedRunningTime="2026-02-14 04:26:59.162207793 +0000 UTC m=+1051.243145127" Feb 14 04:27:13 crc kubenswrapper[4867]: I0214 04:27:13.370035 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.160044 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.161572 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.164101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-q4xdx" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.173087 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.174215 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.175059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.179691 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-scmhd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.183492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.199251 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.200441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.207923 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kggl2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.220743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.221890 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.224003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hv82j" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.229843 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.235264 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.276133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277112 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277218 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277257 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.305878 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.305912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.306876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.310897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g9dmc"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.311725 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.312724 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.314186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mw859"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.353590 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.362269 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381054 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: \"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.395144 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.396514 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.415546 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.415816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9d88p"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.423138 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.425004 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.434376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.453957 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.455484 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.461020 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.468678 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hh6sv"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.470555 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.482664 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: \"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.482980 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.484141 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.493354 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2pspc"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.505719 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.527932 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.536584 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.542295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: \"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.552789 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.583075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584348 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: \"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: E0214 04:27:33.584661 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 14 04:27:33 crc kubenswrapper[4867]: E0214 04:27:33.584710 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:34.08469228 +0000 UTC m=+1086.165629584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.601276 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.681005 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.684358 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.685337 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.690072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: \"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.691711 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kd49j"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.699816 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.710875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.731221 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.740572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.741775 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.753204 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.764172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-srcqs"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.773852 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.791436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.791586 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.798269 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.799479 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.809883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lmbx6"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.810096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: \"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.854879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"]
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.866206 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.872392 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ff2jx"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909125 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909226 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909254 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.938132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.949333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.974683 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.974758 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.069832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.070168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.082133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.110707 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.111075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.111969 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.154129 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.155334 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.161845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hnlct"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.168423 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.172735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.172953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.182281 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.182344 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.182326694 +0000 UTC m=+1087.263264008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.206968 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.228577 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.231092 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.250631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xh5pm"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.256655 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.274048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.274100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.309754 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.311395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.333627 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.346613 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-97lkg"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.347054 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.349945 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.372137 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.373652 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375419 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.379124 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mz82h"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.383247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.390318 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.394572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.396190 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.402005 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-x7jg4"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.403626 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.422324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.429637 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.436872 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.438482 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.442135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-c6gsz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.448057 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.450069 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.455436 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v45zn"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.463742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.475793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"]
Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.479274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.479474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.480306 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.480737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.479871 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.482796 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" 
failed. No retries permitted until 2026-02-14 04:27:34.982724052 +0000 UTC m=+1087.063661556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.488134 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.488809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.517658 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.535094 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.556939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.562963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.591578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.591696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: 
I0214 04:27:34.591803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.633278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.638237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.643444 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.649628 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.647920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.655574 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.657014 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-whvgl" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.694679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96xf\" (UniqueName: \"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.710086 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.748255 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.760222 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.761376 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773533 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zz8bp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773732 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773850 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.783228 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.795926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96xf\" (UniqueName: \"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.796947 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.797063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.797192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.841162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96xf\" (UniqueName: \"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.855688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 
04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.866176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.871011 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jclfs" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.919843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.919944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.920010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.920106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds89f\" (UniqueName: 
\"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.921665 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.921725 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.421709862 +0000 UTC m=+1087.502647176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.922432 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.922468 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.422455562 +0000 UTC m=+1087.503392876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.922495 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.939070 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.959467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.963288 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: W0214 04:27:34.964606 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652d3b74_0634_4f8f_b5ef_3adfc53920eb.slice/crio-52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28 WatchSource:0}: Error finding container 52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28: Status 404 returned error can't find the container with id 52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28 Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.984015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.010528 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022212 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds89f\" (UniqueName: \"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.022729 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.022769 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.022755925 +0000 UTC m=+1088.103693239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.048742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds89f\" (UniqueName: \"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.075499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.233452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.234479 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.234579 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:37.234560842 +0000 UTC m=+1089.315498156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.442286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.442361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442519 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442553 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442601 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.442578352 +0000 UTC m=+1088.523515666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442670 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.442642683 +0000 UTC m=+1088.523580027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.465913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" event={"ID":"652d3b74-0634-4f8f-b5ef-3adfc53920eb","Type":"ContainerStarted","Data":"52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28"} Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.534368 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.549569 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.571190 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.055444 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.055655 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.055723 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.055691256 +0000 UTC m=+1090.136628570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.312984 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"] Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.345818 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ff35ef_77e1_4085_ad2f_837ebc666b2a.slice/crio-4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9 WatchSource:0}: Error finding container 4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9: Status 404 returned error can't find the container with id 
4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9 Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.411275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.417156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.430325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.444260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"] Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.450831 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb6de63_3c92_43de_a01b_b34df765aeba.slice/crio-84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863 WatchSource:0}: Error finding container 84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863: Status 404 returned error can't find the container with id 84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863 Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.465457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.465557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.465887 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.465962 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.465941835 +0000 UTC m=+1090.546879149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.466089 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.466115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.466107349 +0000 UTC m=+1090.547044663 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.470094 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc65ca0c_1d72_468f_b600_dfb8332bf4bd.slice/crio-d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d WatchSource:0}: Error finding container d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d: Status 404 returned error can't find the container with id d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.484873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.486727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" event={"ID":"3025ff58-4a91-43f5-8f15-94cadd0cef8b","Type":"ContainerStarted","Data":"3486d63008881850141f3c6801e5de370335935b8a0c2fd4f6e6473dfca53257"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.490383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.491821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" 
event={"ID":"6b5078d9-f30f-40a8-b5b5-8eb11271ec10","Type":"ContainerStarted","Data":"70004c12d7ec2f7c3fbf5ed65f2704ce433ec9e7b6e632f35b08e2734c5129ab"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.492751 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.495605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" event={"ID":"4b75df5b-04e5-445f-8d2d-57c6cbe5971c","Type":"ContainerStarted","Data":"c090158e55241f0f12ac4546db79eb2cccfa1075841accaaaefe07be84fabef6"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.497252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" event={"ID":"185d4fd5-608b-48d8-8731-27e7a05adfe2","Type":"ContainerStarted","Data":"c397c8163fa1ede506dad697514827cc45774b1109508546a439953b13268236"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.499383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" event={"ID":"66c8a0dd-f076-4994-bd42-39c80de83233","Type":"ContainerStarted","Data":"1767024aea7b6d6a4618042325dd23bbba1b2c218958dcb948aefcbed3993a01"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.501040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" event={"ID":"38a9cdf3-42e2-4279-8092-af7e8c82bc51","Type":"ContainerStarted","Data":"468f8b506ff6171da44b28b3b05f6da5a38aba9184f679530ec1d4c9ba71fdfd"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.502094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" 
event={"ID":"1f889f7b-8ae5-43e3-ab54-d3bf06c010df","Type":"ContainerStarted","Data":"0dd50f2f66fa11dad74488744124afd939b227ecebb16740df7975f37dd8b6e0"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.503678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" event={"ID":"7bb6de63-3c92-43de-a01b-b34df765aeba","Type":"ContainerStarted","Data":"84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.852226 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.882708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.893292 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.940245 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.966458 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.973421 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"] Feb 14 04:27:37 crc kubenswrapper[4867]: W0214 04:27:37.049528 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb00aaf_6760_440e_827a_f795baf3693a.slice/crio-2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07 
WatchSource:0}: Error finding container 2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07: Status 404 returned error can't find the container with id 2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07 Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.121398 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.158779 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.195638 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.304414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.304744 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.304801 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:41.304782516 +0000 UTC m=+1093.385719830 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.310316 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgtlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-55dcdcc8d-49t56_openstack-operators(d72a97fb-2a6a-4af1-8f0c-de88ab679119): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.313835 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.534890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"ec8538ea905e098132cd0a4606ca455df5793f551ee0ffc05f39aeaefa2a5afd"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.553406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" 
event={"ID":"82e5dbee-ab1e-498c-9460-be75226afa18","Type":"ContainerStarted","Data":"bbed95620b33275c0700efbeb8a76ea9636171c1539b4dc8b4d7dce7ae4bc3fb"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.560342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" event={"ID":"d72a97fb-2a6a-4af1-8f0c-de88ab679119","Type":"ContainerStarted","Data":"aed5e5c714e4a9c1e168c59ce610a5ecbbc01db9fcb895fd3688ee465aacf1ce"} Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.563376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.563698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" event={"ID":"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a","Type":"ContainerStarted","Data":"451971d4e6eb23f775ab700a3e8168a3b4894c06cec5fb806095d84b4e098b02"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.565390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" event={"ID":"dc65ca0c-1d72-468f-b600-dfb8332bf4bd","Type":"ContainerStarted","Data":"d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.589245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" event={"ID":"74a43e5b-11c4-459d-bbc7-03aa03489f17","Type":"ContainerStarted","Data":"0cc8aac57799d65f4415381fa51edae43efec455f3302fead56283b6071fefac"} 
Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.593391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" event={"ID":"9ec66be5-3947-45d1-bf34-c7639e8d4c8a","Type":"ContainerStarted","Data":"fd3f56a56f7735e4753c75e480b745ffcfcb6e579b6d15d338b096ed0bb3f044"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.602482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" event={"ID":"ffb00aaf-6760-440e-827a-f795baf3693a","Type":"ContainerStarted","Data":"2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.608431 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" event={"ID":"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38","Type":"ContainerStarted","Data":"ba835ce0379d301618433cd283b5f5bdf8901d8b2297bb3ea4165c0b7992dc57"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.610036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"ae9e1c6041f00c8d2d1988f72b26402f126a810ecf255e0e707a4a679e15e711"} Feb 14 04:27:38 crc kubenswrapper[4867]: I0214 04:27:38.140535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.140907 4867 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.141066 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:42.141041491 +0000 UTC m=+1094.221978805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: I0214 04:27:38.555330 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:38 crc kubenswrapper[4867]: I0214 04:27:38.555425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.555783 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.555861 4867 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:42.555839466 +0000 UTC m=+1094.636776780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.556326 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.556363 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:42.556353839 +0000 UTC m=+1094.637291153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.643992 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:41 crc kubenswrapper[4867]: I0214 04:27:41.338203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:41 crc kubenswrapper[4867]: E0214 04:27:41.338884 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:41 crc kubenswrapper[4867]: E0214 04:27:41.338935 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:49.338920793 +0000 UTC m=+1101.419858107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.150828 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.151063 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.151237 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.151221348 +0000 UTC m=+1102.232158662 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.559278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.559487 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.559418 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.559642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.559626588 +0000 UTC m=+1102.640563902 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.560173 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.560213 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.560204273 +0000 UTC m=+1102.641141577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.406482 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.407184 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jt5nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-chbgl_openstack-operators(3025ff58-4a91-43f5-8f15-94cadd0cef8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.408406 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.410995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.418915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.466650 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.767343 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.225465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.230058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.281714 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.632922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.634232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:50 crc kubenswrapper[4867]: E0214 04:27:50.634457 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:50 crc kubenswrapper[4867]: E0214 04:27:50.634570 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:28:06.634544655 +0000 UTC m=+1118.715481979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.637326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.483266 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.483998 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nd8nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-8dzwp_openstack-operators(6b5078d9-f30f-40a8-b5b5-8eb11271ec10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.485213 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.781592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.986919 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.987085 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crhw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-t7hwz_openstack-operators(67e3f2b9-2dbf-4c35-b1cd-02be51f58e38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.988426 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" Feb 14 04:27:52 crc kubenswrapper[4867]: E0214 04:27:52.788175 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.969839 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.970434 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnw7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-jxpv2_openstack-operators(185d4fd5-608b-48d8-8731-27e7a05adfe2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.972073 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.745452 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.745850 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m96xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-6d9jj_openstack-operators(82e5dbee-ab1e-498c-9460-be75226afa18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.747563 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.812661 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.813006 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.411062 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.411288 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gntpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-tpfxn_openstack-operators(1f889f7b-8ae5-43e3-ab54-d3bf06c010df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.412518 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.837949 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.711566 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.712389 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjptq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-2xwdd_openstack-operators(38a9cdf3-42e2-4279-8092-af7e8c82bc51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.713636 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.853996 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.480552 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.480922 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dth6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-snrw6_openstack-operators(bc4bb4fd-bcc8-438b-af84-a2db3d3e346a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.482298 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.859021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.000335 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.000824 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpjl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-6nhjp_openstack-operators(94ff35ef-77e1-4085-ad2f-837ebc666b2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.002011 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.542917 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.543084 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8t7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-pxm8d_openstack-operators(66c8a0dd-f076-4994-bd42-39c80de83233): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.545008 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.867439 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.867488 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.991647 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.992069 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-vwvtz_openstack-operators(9ec66be5-3947-45d1-bf34-c7639e8d4c8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.993528 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.445358 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.445592 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btzhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-wwm9m_openstack-operators(7bb6de63-3c92-43de-a01b-b34df765aeba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.446812 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.874054 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.874406 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" Feb 14 04:28:01 crc kubenswrapper[4867]: I0214 04:28:01.252608 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:28:01 crc kubenswrapper[4867]: I0214 04:28:01.252684 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.980698 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.980868 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trhdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-ndb8l_openstack-operators(652d3b74-0634-4f8f-b5ef-3adfc53920eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.982027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" Feb 14 04:28:02 crc kubenswrapper[4867]: E0214 04:28:02.891369 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.130935 4867 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.131316 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkk9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-tf6rg_openstack-operators(74a43e5b-11c4-459d-bbc7-03aa03489f17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.132812 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.898363 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.263175 4867 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.263695 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ds89f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-87pdl_openstack-operators(c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.265047 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podUID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" Feb 14 04:28:04 crc kubenswrapper[4867]: W0214 04:28:04.854611 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod634f9e2f_2100_49e3_a31f_a369cf8ff13f.slice/crio-a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2 WatchSource:0}: Error finding container a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2: Status 404 returned error can't find the container with id 
a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2 Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.856015 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"] Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.906990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.907070 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.908926 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" event={"ID":"d72a97fb-2a6a-4af1-8f0c-de88ab679119","Type":"ContainerStarted","Data":"fed495e34766497dd42cf0325a418ddf77140542a3dce04637259a53eb94b72f"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.909589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.910559 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" event={"ID":"dc65ca0c-1d72-468f-b600-dfb8332bf4bd","Type":"ContainerStarted","Data":"e88e177c8e3d3815ee6c35934ac281ad46676b4d19ae3457ab25535ae3e922be"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.910712 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:28:04 crc kubenswrapper[4867]: 
I0214 04:28:04.912144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" event={"ID":"ffb00aaf-6760-440e-827a-f795baf3693a","Type":"ContainerStarted","Data":"c83513991e76903ffa1ba3f5e92920d4dac8235a719191fc8a9e37c60c0a9075"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.912297 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.918987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" event={"ID":"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38","Type":"ContainerStarted","Data":"67f0608bf77e8453cbdaea86d982c6360c1581bec5e3ea53dc77c4258ce8e77a"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.919227 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.927371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" event={"ID":"6b5078d9-f30f-40a8-b5b5-8eb11271ec10","Type":"ContainerStarted","Data":"7256c05ae79a737b3cc7955bbdecdf7c386ed5125625a5dea66d06a219c3f123"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.927663 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.931255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" event={"ID":"4b75df5b-04e5-445f-8d2d-57c6cbe5971c","Type":"ContainerStarted","Data":"66917816db67d8bf627a0d6b3d12c972b57d5b2fa6cec95cc61d85d0fb783963"} Feb 14 04:28:04 crc 
kubenswrapper[4867]: I0214 04:28:04.931748 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.932799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2"} Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.935357 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podUID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.945379 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podStartSLOduration=4.62802649 podStartE2EDuration="31.945360043s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.933399252 +0000 UTC m=+1089.014336556" lastFinishedPulling="2026-02-14 04:28:04.250732795 +0000 UTC m=+1116.331670109" observedRunningTime="2026-02-14 04:28:04.931310437 +0000 UTC m=+1117.012247751" watchObservedRunningTime="2026-02-14 04:28:04.945360043 +0000 UTC m=+1117.026297357" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.024696 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podStartSLOduration=4.454048541 podStartE2EDuration="32.024677696s" 
podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.572477421 +0000 UTC m=+1087.653414735" lastFinishedPulling="2026-02-14 04:28:03.143106566 +0000 UTC m=+1115.224043890" observedRunningTime="2026-02-14 04:28:05.004314566 +0000 UTC m=+1117.085251880" watchObservedRunningTime="2026-02-14 04:28:05.024677696 +0000 UTC m=+1117.105615010" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.047762 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"] Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.054073 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podStartSLOduration=4.871434172 podStartE2EDuration="32.05404842s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.062127131 +0000 UTC m=+1089.143064445" lastFinishedPulling="2026-02-14 04:28:04.244741379 +0000 UTC m=+1116.325678693" observedRunningTime="2026-02-14 04:28:05.044654226 +0000 UTC m=+1117.125591540" watchObservedRunningTime="2026-02-14 04:28:05.05404842 +0000 UTC m=+1117.134985754" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.095145 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podStartSLOduration=3.83936297 podStartE2EDuration="32.095127399s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.448650647 +0000 UTC m=+1088.529587961" lastFinishedPulling="2026-02-14 04:28:04.704415076 +0000 UTC m=+1116.785352390" observedRunningTime="2026-02-14 04:28:05.093725652 +0000 UTC m=+1117.174662966" watchObservedRunningTime="2026-02-14 04:28:05.095127399 +0000 UTC m=+1117.176064713" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.130162 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podStartSLOduration=4.6839956879999995 podStartE2EDuration="32.130138369s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.049017652 +0000 UTC m=+1089.129954966" lastFinishedPulling="2026-02-14 04:28:04.495160333 +0000 UTC m=+1116.576097647" observedRunningTime="2026-02-14 04:28:05.129310058 +0000 UTC m=+1117.210247372" watchObservedRunningTime="2026-02-14 04:28:05.130138369 +0000 UTC m=+1117.211075673" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.235917 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podStartSLOduration=4.461384635 podStartE2EDuration="32.23590205s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.475544883 +0000 UTC m=+1088.556482197" lastFinishedPulling="2026-02-14 04:28:04.250062298 +0000 UTC m=+1116.330999612" observedRunningTime="2026-02-14 04:28:05.187214334 +0000 UTC m=+1117.268151648" watchObservedRunningTime="2026-02-14 04:28:05.23590205 +0000 UTC m=+1117.316839354" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.236341 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podStartSLOduration=5.05117993 podStartE2EDuration="32.236336642s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.310114264 +0000 UTC m=+1089.391051578" lastFinishedPulling="2026-02-14 04:28:04.495270976 +0000 UTC m=+1116.576208290" observedRunningTime="2026-02-14 04:28:05.232373349 +0000 UTC m=+1117.313310663" watchObservedRunningTime="2026-02-14 04:28:05.236336642 +0000 UTC m=+1117.317273956" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.940907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" event={"ID":"ebee5651-7233-4c18-bb97-a4dc91eabef4","Type":"ContainerStarted","Data":"22c74f9f2f8244e121926f179c8afdca6427d769bf911f2aa6fbbf3221939845"}
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.942393 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" event={"ID":"3025ff58-4a91-43f5-8f15-94cadd0cef8b","Type":"ContainerStarted","Data":"a29228d01cbb6e1a2e7ef06b29313bb44d6874f7c517e0036cafd031ec6c4fc1"}
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.943146 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.965620 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podStartSLOduration=3.012739201 podStartE2EDuration="32.96560246s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.574520783 +0000 UTC m=+1087.655458097" lastFinishedPulling="2026-02-14 04:28:05.527384032 +0000 UTC m=+1117.608321356" observedRunningTime="2026-02-14 04:28:05.962221493 +0000 UTC m=+1118.043158807" watchObservedRunningTime="2026-02-14 04:28:05.96560246 +0000 UTC m=+1118.046539774"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.658695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.666135 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.843010 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.363622 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"]
Feb 14 04:28:07 crc kubenswrapper[4867]: W0214 04:28:07.379419 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83fa345_043f_453c_b797_a00db3111d44.slice/crio-53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6 WatchSource:0}: Error finding container 53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6: Status 404 returned error can't find the container with id 53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.965003 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" event={"ID":"c83fa345-043f-453c-b797-a00db3111d44","Type":"ContainerStarted","Data":"7a2ee5a9bcad944530f5c6de38ec65cfcb4cfe6b779359783d1bc2456001426a"}
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.965302 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" event={"ID":"c83fa345-043f-453c-b797-a00db3111d44","Type":"ContainerStarted","Data":"53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6"}
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.966390 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:08 crc kubenswrapper[4867]: I0214 04:28:08.020095 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podStartSLOduration=34.020051927 podStartE2EDuration="34.020051927s" podCreationTimestamp="2026-02-14 04:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:28:08.008091806 +0000 UTC m=+1120.089029120" watchObservedRunningTime="2026-02-14 04:28:08.020051927 +0000 UTC m=+1120.100989241"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.510172 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.705290 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.942221 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.118250 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.541142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.646432 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.967454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.987436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.048064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" event={"ID":"185d4fd5-608b-48d8-8731-27e7a05adfe2","Type":"ContainerStarted","Data":"fed09a44d0d668968ebe9709f90b5aa759aebf8092c357413bb704036e8e59ec"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.049281 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.051711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" event={"ID":"ebee5651-7233-4c18-bb97-a4dc91eabef4","Type":"ContainerStarted","Data":"d3beb0e27719410f426cff5b15244494a4ee7c2cbce0eb2198cf7ca641696505"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.052007 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.055937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" event={"ID":"1f889f7b-8ae5-43e3-ab54-d3bf06c010df","Type":"ContainerStarted","Data":"8894d7e55a88068670ef6806a3ba8242e721063ca543ab0b1eb958d616bd6830"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.056237 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.063622 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" event={"ID":"7bb6de63-3c92-43de-a01b-b34df765aeba","Type":"ContainerStarted","Data":"d87e3587e2f3ef22cff7675f9ef30627896f6c5f50a0d16d8ccdc5839a94ae83"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.064355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.072742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" event={"ID":"66c8a0dd-f076-4994-bd42-39c80de83233","Type":"ContainerStarted","Data":"5037d0e368dde2b99b3c5a944e803df1b58c708c2c477ef6e42307397ba217ea"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.073651 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.091469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" event={"ID":"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a","Type":"ContainerStarted","Data":"e3063da35fa7215aaed10a458603d6c94495363580003e6a0e6a48e4a1367801"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.091928 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.104851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" event={"ID":"38a9cdf3-42e2-4279-8092-af7e8c82bc51","Type":"ContainerStarted","Data":"14ca97f9db879083cb331bf07f6fc278f12ca99a6d001aa8f050bf341b95ecb0"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.105753 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.106092 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podStartSLOduration=4.090180098 podStartE2EDuration="42.106076927s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.358206189 +0000 UTC m=+1088.439143503" lastFinishedPulling="2026-02-14 04:28:14.374103018 +0000 UTC m=+1126.455040332" observedRunningTime="2026-02-14 04:28:15.106030536 +0000 UTC m=+1127.186967870" watchObservedRunningTime="2026-02-14 04:28:15.106076927 +0000 UTC m=+1127.187014241"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.119780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.120695 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.136124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.136897 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.140653 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podStartSLOduration=3.342405857 podStartE2EDuration="42.140634626s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.575700554 +0000 UTC m=+1087.656637868" lastFinishedPulling="2026-02-14 04:28:14.373929323 +0000 UTC m=+1126.454866637" observedRunningTime="2026-02-14 04:28:15.131049226 +0000 UTC m=+1127.211986540" watchObservedRunningTime="2026-02-14 04:28:15.140634626 +0000 UTC m=+1127.221571940"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.145966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" event={"ID":"652d3b74-0634-4f8f-b5ef-3adfc53920eb","Type":"ContainerStarted","Data":"b7b2b14eb03ea6bf5916f1c07b3ad2754d1387e5fed0b42455928cd802f75d69"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.146419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.169735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podStartSLOduration=4.030531109 podStartE2EDuration="42.169670791s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.47310912 +0000 UTC m=+1088.554046434" lastFinishedPulling="2026-02-14 04:28:14.612248802 +0000 UTC m=+1126.693186116" observedRunningTime="2026-02-14 04:28:15.16154867 +0000 UTC m=+1127.242485994" watchObservedRunningTime="2026-02-14 04:28:15.169670791 +0000 UTC m=+1127.250608105"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.193916 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podStartSLOduration=4.291797685 podStartE2EDuration="42.193900191s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.473066489 +0000 UTC m=+1088.554003803" lastFinishedPulling="2026-02-14 04:28:14.375168995 +0000 UTC m=+1126.456106309" observedRunningTime="2026-02-14 04:28:15.188815189 +0000 UTC m=+1127.269752503" watchObservedRunningTime="2026-02-14 04:28:15.193900191 +0000 UTC m=+1127.274837505"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.224790 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podStartSLOduration=33.015922539 podStartE2EDuration="42.224773034s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:28:05.066990727 +0000 UTC m=+1117.147928041" lastFinishedPulling="2026-02-14 04:28:14.275841222 +0000 UTC m=+1126.356778536" observedRunningTime="2026-02-14 04:28:15.218615454 +0000 UTC m=+1127.299552768" watchObservedRunningTime="2026-02-14 04:28:15.224773034 +0000 UTC m=+1127.305710348"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.245550 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podStartSLOduration=4.348013786 podStartE2EDuration="42.245528314s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.478101829 +0000 UTC m=+1088.559039143" lastFinishedPulling="2026-02-14 04:28:14.375616357 +0000 UTC m=+1126.456553671" observedRunningTime="2026-02-14 04:28:15.237241738 +0000 UTC m=+1127.318179042" watchObservedRunningTime="2026-02-14 04:28:15.245528314 +0000 UTC m=+1127.326465628"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.273382 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podStartSLOduration=2.6954447630000002 podStartE2EDuration="42.273360488s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.020860076 +0000 UTC m=+1087.101797390" lastFinishedPulling="2026-02-14 04:28:14.598775811 +0000 UTC m=+1126.679713115" observedRunningTime="2026-02-14 04:28:15.269305442 +0000 UTC m=+1127.350242756" watchObservedRunningTime="2026-02-14 04:28:15.273360488 +0000 UTC m=+1127.354297802"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.314446 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podStartSLOduration=4.971096393 podStartE2EDuration="42.314420976s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.933427073 +0000 UTC m=+1089.014364387" lastFinishedPulling="2026-02-14 04:28:14.276751646 +0000 UTC m=+1126.357688970" observedRunningTime="2026-02-14 04:28:15.309048826 +0000 UTC m=+1127.389986140" watchObservedRunningTime="2026-02-14 04:28:15.314420976 +0000 UTC m=+1127.395358290"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.349289 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podStartSLOduration=4.372369008 podStartE2EDuration="42.349268102s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.35862668 +0000 UTC m=+1088.439563994" lastFinishedPulling="2026-02-14 04:28:14.335525774 +0000 UTC m=+1126.416463088" observedRunningTime="2026-02-14 04:28:15.348251616 +0000 UTC m=+1127.429188930" watchObservedRunningTime="2026-02-14 04:28:15.349268102 +0000 UTC m=+1127.430205416"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.392659 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podStartSLOduration=33.067462609 podStartE2EDuration="42.39263601s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:28:04.857480847 +0000 UTC m=+1116.938418161" lastFinishedPulling="2026-02-14 04:28:14.182654248 +0000 UTC m=+1126.263591562" observedRunningTime="2026-02-14 04:28:15.385743011 +0000 UTC m=+1127.466680345" watchObservedRunningTime="2026-02-14 04:28:15.39263601 +0000 UTC m=+1127.473573324"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.230019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" event={"ID":"82e5dbee-ab1e-498c-9460-be75226afa18","Type":"ContainerStarted","Data":"e4b9247c8e6be527ef2a9a0b9af8b49146d28bd377bed746f75902fbf11841a2"}
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.232210 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.252328 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podStartSLOduration=6.248007481 podStartE2EDuration="43.252303481s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.371715037 +0000 UTC m=+1089.452652351" lastFinishedPulling="2026-02-14 04:28:14.376011037 +0000 UTC m=+1126.456948351" observedRunningTime="2026-02-14 04:28:16.247122696 +0000 UTC m=+1128.328060020" watchObservedRunningTime="2026-02-14 04:28:16.252303481 +0000 UTC m=+1128.333240795"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.849549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.237210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" event={"ID":"9ec66be5-3947-45d1-bf34-c7639e8d4c8a","Type":"ContainerStarted","Data":"0240b976b25ccf1c053a870ea138e0a0e957fc5c1bfb9682d6269c052b9ba2d5"}
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.237635 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.238803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" event={"ID":"74a43e5b-11c4-459d-bbc7-03aa03489f17","Type":"ContainerStarted","Data":"a01ba509f7a52344ad900a86ef39c3df54f080f47bbaad35cc8747cba870531b"}
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.239778 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.256192 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podStartSLOduration=5.101790526 podStartE2EDuration="44.256160672s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.049169116 +0000 UTC m=+1089.130106430" lastFinishedPulling="2026-02-14 04:28:16.203539262 +0000 UTC m=+1128.284476576" observedRunningTime="2026-02-14 04:28:17.250950966 +0000 UTC m=+1129.331888290" watchObservedRunningTime="2026-02-14 04:28:17.256160672 +0000 UTC m=+1129.337097996"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.279011 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podStartSLOduration=5.086305589 podStartE2EDuration="44.278994716s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.008691679 +0000 UTC m=+1089.089628993" lastFinishedPulling="2026-02-14 04:28:16.201380806 +0000 UTC m=+1128.282318120" observedRunningTime="2026-02-14 04:28:17.272497267 +0000 UTC m=+1129.353434591" watchObservedRunningTime="2026-02-14 04:28:17.278994716 +0000 UTC m=+1129.359932030"
Feb 14 04:28:19 crc kubenswrapper[4867]: I0214 04:28:19.476231 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.268463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc"}
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.290766 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.297408 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podStartSLOduration=4.080734356 podStartE2EDuration="46.297383444s" podCreationTimestamp="2026-02-14 04:27:34 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.255736918 +0000 UTC m=+1089.336674222" lastFinishedPulling="2026-02-14 04:28:19.472385946 +0000 UTC m=+1131.553323310" observedRunningTime="2026-02-14 04:28:20.286585784 +0000 UTC m=+1132.367523138" watchObservedRunningTime="2026-02-14 04:28:20.297383444 +0000 UTC m=+1132.378320758"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.492677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.531107 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.605919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.684157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.087079 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.172251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.211427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.395795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.714793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.942550 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:28:25 crc kubenswrapper[4867]: I0214 04:28:25.020183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"
Feb 14 04:28:31 crc kubenswrapper[4867]: I0214 04:28:31.250937 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:28:31 crc kubenswrapper[4867]: I0214 04:28:31.251526 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.381323 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.386291 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.392933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393092 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393106 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4thnd"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393161 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393256 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.465847 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.467780 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.476439 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.477995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.478316 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.487241 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579863 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579889 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.581150 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.619165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.681610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.682005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.682050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.683244 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.683284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.705440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.720914 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.831283 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.407743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.486101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:28:44 crc kubenswrapper[4867]: W0214 04:28:44.488827 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a87cc0d_e74a_4be2_9ac2_7f9d565f34e3.slice/crio-6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d WatchSource:0}: Error finding container 6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d: Status 404 returned error can't find the container with id 6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d
Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.522009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" event={"ID":"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3","Type":"ContainerStarted","Data":"6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d"}
Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.523101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" event={"ID":"1e9ddba3-128d-4025-9661-b07c5e1e9329","Type":"ContainerStarted","Data":"37ef45b02f432cc3a119a40a28ecba25608ab0780e37295f9063ae35dc630718"}
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.705582 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.729920 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"]
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.735167 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.749353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"]
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.847050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.849632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.849705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951261 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.952307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.952412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.995476 4867
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.075793 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.391361 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.443179 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.445183 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.459592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" 
Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.706901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.707346 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.739422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.793542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.861597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.863861 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875040 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875439 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875615 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7gx8s" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875677 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875878 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875824 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.892422 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.010709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011075 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011146 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011425 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011522 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011556 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.038331 4867 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:28:47 crc kubenswrapper[4867]: W0214 04:28:47.055767 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe41be0_f7f8_47ff_a587_b85e282fa5ee.slice/crio-e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b WatchSource:0}: Error finding container e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b: Status 404 returned error can't find the container with id e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114166 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.115910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.117437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.117887 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.118803 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.120805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.122455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.122819 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.124249 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.124282 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c81ba883a06ca9e019b2d7c726ddbfb519b81827f5cfcee1e25c00752814b8f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.132069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.144656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.148740 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.194748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.208719 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.360718 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:47 crc kubenswrapper[4867]: W0214 04:28:47.377941 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f0e82b_f765_4fe1_b74e_856e1a6d8b8c.slice/crio-9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239 WatchSource:0}: Error finding container 9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239: Status 404 returned error can't find the container with id 9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239 Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.549875 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.552703 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558327 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558463 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558565 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558681 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558734 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558795 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.563282 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xwq4z" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.586745 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.597969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.600072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.620333 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.622044 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " 
pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.624006 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " 
pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.624023 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.635973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" event={"ID":"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c","Type":"ContainerStarted","Data":"9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239"} Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.637634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" event={"ID":"dbe41be0-f7f8-47ff-a587-b85e282fa5ee","Type":"ContainerStarted","Data":"e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b"} Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.661793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.678597 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726014 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726208 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726392 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " 
pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726636 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726654 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726780 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726815 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727015 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727097 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.728211 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.728687 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.729257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.730370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.730607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 
04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.733458 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.733518 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6ecbc127793ccdba0f55c49c319b455a0b3bdad6043979264d9c6d7f92205d3/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.734429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.735064 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.735480 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.738553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.745235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.769407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.779111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: W0214 04:28:47.779462 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e022d9_e2db_41eb_bbc8_36a85211a141.slice/crio-eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e WatchSource:0}: Error finding container eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e: Status 404 returned error can't find the container with id eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: 
\"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836556 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836627 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 
04:28:47.836680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836810 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836841 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836951 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: 
\"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837060 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837080 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837659 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.839845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.841182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.842262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.842586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.844498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod 
\"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.844721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.845134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.845366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.852182 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.852219 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d7bcff7c5d0322515cfcd29e48bfb1d0d6f9021316ba38c2028cf5ce82afee/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.853719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.854120 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.854139 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55ff7cc17667ae9e120da2b34de2e1baed28e5c0bfceac7c1699349f36759e58/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.855497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.860708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.862885 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.863755 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.865753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866160 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.867486 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.868825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.917202 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.978944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.004030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.054972 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.250284 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.655137 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e"}
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.975691 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.979942 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.983293 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xbw69"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988765 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988860 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.991883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.068418 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.068458 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103242 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103355 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.190034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.208229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209466 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.210695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.213055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.213355 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.214418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.215193 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.215266 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/407d6ce299045fd326f604e987d7292806f389f36b0aa734b66f6d28c6aa64a2/globalmount\"" pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.219591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.242719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.254664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.376583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.392250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.631902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.738884 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"6d2235a75be13119e9c9aa74a5f3a2e2f13d32b41febb3b537fd57f955f1f8bc"}
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.770820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136"}
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.803260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"1a22c1b816602c7a9c207095a5f963d6cce2df715e59142c62ec1b7539b424fc"}
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.512382 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.776399 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.778590 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783173 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783473 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783775 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ffdkm"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.784454 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.788769 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.810804 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.817338 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.829101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.829352 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rs22h"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.832061 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.874125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"6e3034a330e6e973a85a9955386cad48ddcbda0e3b4d2bda1bd1c14a5a4e9067"}
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.880187 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.906970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907091 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907350 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008992 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.015951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.016771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.017047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.019957 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.019998 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ed8d1f3f2a89d962c8e70e2f9692b177bbfe1fa5bf896782a6497e50ff763e73/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.024986 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.043346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.048238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112729 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.114199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.114341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.126698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.137083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.141068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.142207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.167986 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.429584 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.789312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.974782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f1d6dceb-5ee5-407d-ade4-be35d128d8dc","Type":"ContainerStarted","Data":"a12f5cd207497e1be12c7bcbddd32c2c27c498b8a906cdeac8ccb904fa2f62ed"}
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.978388 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.822278 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.826081 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.834310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tpx28" Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.837926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.930866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.033618 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.073698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"cb4b60d6fe1eb81c3db75f2723e52986af5ddbfde223fcc274cdbd671f8e5b99"} Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.081656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.176224 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.708706 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"] Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.710448 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.720996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-wftkz" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.721243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.748577 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"] Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.766656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.766773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 
04:28:54.869693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.869790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.905860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.933067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.084501 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.308998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-796d588566-h9wcn"] Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.310996 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.386862 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796d588566-h9wcn"] Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392216 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.442114 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.445118 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477272 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477596 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dgxf9" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477706 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478628 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478749 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" 
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " 
pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497227 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497912 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.502647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.505429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.505617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.506039 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod 
\"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.565546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.576633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.586155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.599929 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600050 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600255 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.601536 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.602331 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 
04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.603163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.603675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.637396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.638167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.650046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.653968 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.675405 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.685068 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.699634 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.700292 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.700328 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c69566d4c941ca8a51b196b92114beed9536eafb9e04e7c441265c9a20c9feb/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.154312 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.275837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.401690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.503619 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.519945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7lpqj"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.521815 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530105 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530440 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-475js"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530671 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.552584 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.577803 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dznst"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.584114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.600205 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dznst"]
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.638989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639032 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639072 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639110 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639153 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.752142 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.752569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.758134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.784238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.791668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.807694 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796d588566-h9wcn"]
Feb 14 crc kubenswrapper[4867]: I0214 04:28:56.814344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.821106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.846668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.849595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.847463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850994 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.851098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.851191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.852102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.855311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.871322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.880803 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj"
Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.929230 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.344369 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.346983 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352389 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352770 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352965 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.353027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fqcln"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.353435 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.355141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.362145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" event={"ID":"701367b7-aef6-43b5-a0f9-3a91206962de","Type":"ContainerStarted","Data":"297be4f2c2d05398602a7a56cc65b22059095b88f73a6839ea02bb1fb7fdd68b"}
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473414 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.475027 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.475134 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577858 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.580214 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.581264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.584889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.585169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586847 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586871 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586979 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2791e6f04a407c8a08ed17014ba6b90fc1c1aed99508ca220d2fd83daa6b717c/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.594043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.596446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.665841 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.690100 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.461085 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.466531 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470429 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470658 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470755 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w4792"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.500960 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586907 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.690268 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.690316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.691120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.691210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.691237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.692389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.692645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.693363 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.693402 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0489361f098bfb09ef2865530d497df974905e0ea95999431299d200f73e3b92/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697534 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697944 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.709479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.812937 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.101152 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255347 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255764 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.256769 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.256823 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c" gracePeriod=600 Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470711 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c" exitCode=0 Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470804 4867 scope.go:117] "RemoveContainer" containerID="3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" Feb 14 04:29:07 crc kubenswrapper[4867]: W0214 04:29:07.577632 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda78fec22_f395_42fc_a228_8d896580bc95.slice/crio-7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d WatchSource:0}: Error finding container 7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d: Status 404 returned error can't find the container with id 7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d Feb 14 04:29:08 crc kubenswrapper[4867]: I0214 04:29:08.544333 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerStarted","Data":"7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d"} Feb 14 04:29:08 crc kubenswrapper[4867]: I0214 04:29:08.545540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796d588566-h9wcn" event={"ID":"41d35864-bb64-45f3-bc1e-a7d5440c35ad","Type":"ContainerStarted","Data":"35c6aea9ac553c5348e6649237f624ad7062d04a2f6e1250a646ada88b211005"} Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.438896 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.439703 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24d97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Windows
Options:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(b27199a8-11ac-4e59-90b8-b42387dd6dd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.440989 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.673359 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.893331 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.893501 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf2rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-492b9_openshift-operators(701367b7-aef6-43b5-a0f9-3a91206962de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.894985 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podUID="701367b7-aef6-43b5-a0f9-3a91206962de" Feb 14 04:29:22 crc 
kubenswrapper[4867]: E0214 04:29:22.685739 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podUID="701367b7-aef6-43b5-a0f9-3a91206962de" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.310856 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.311278 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-294tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(6bc83863-74f4-4509-969c-0f3305a542a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc 
kubenswrapper[4867]: E0214 04:29:24.312475 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.321374 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.321478 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q676p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(9bba5174-edd6-4e59-8b84-6c50439be88e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc 
kubenswrapper[4867]: E0214 04:29:24.321804 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.322073 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sm7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD]
,},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(505de461-9e6f-4914-bf50-e2bf4149b566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.322916 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.324029 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.364824 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.365325 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp 
/tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,Re
adOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(647ba30a-5526-4e27-9095-680c31ff4eb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.366680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.381769 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.382410 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e1e022d9-e2db-41eb-bbc8-36a85211a141): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.383746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.709984 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710255 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710303 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" 
podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710378 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" Feb 14 04:29:24 crc kubenswrapper[4867]: I0214 04:29:24.835620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj"] Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.494817 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.495285 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4khb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-z692n_openstack(6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.496937 4867 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" podUID="6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.504780 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.504972 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-959cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-s87hs_openstack(1e9ddba3-128d-4025-9661-b07c5e1e9329): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.506279 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" podUID="1e9ddba3-128d-4025-9661-b07c5e1e9329" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.514560 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.514747 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xj27v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-hxkz7_openstack(a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.516036 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" Feb 14 04:29:25 crc kubenswrapper[4867]: W0214 04:29:25.516999 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c28c0f_9310_4721_87cf_2d1bb88b5bba.slice/crio-4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a WatchSource:0}: Error finding container 4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a: Status 404 returned error can't find the container with id 4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.523963 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 
04:29:25.524194 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7j8f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lbzlt_openstack(dbe41be0-f7f8-47ff-a587-b85e282fa5ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.525680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" Feb 14 04:29:25 crc kubenswrapper[4867]: I0214 04:29:25.739840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj" event={"ID":"16c28c0f-9310-4721-87cf-2d1bb88b5bba","Type":"ContainerStarted","Data":"4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a"} Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.750097 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.750487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973270 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973572 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973709 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5zbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(a78fec22-f395-42fc-a228-8d896580bc95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.975091 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.372297 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.384675 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.486777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"1e9ddba3-128d-4025-9661-b07c5e1e9329\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"1e9ddba3-128d-4025-9661-b07c5e1e9329\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487365 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487407 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.488260 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config" (OuterVolumeSpecName: "config") pod "1e9ddba3-128d-4025-9661-b07c5e1e9329" (UID: "1e9ddba3-128d-4025-9661-b07c5e1e9329"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.488574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config" (OuterVolumeSpecName: "config") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.494392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7" (OuterVolumeSpecName: "kube-api-access-4khb7") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "kube-api-access-4khb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.495412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx" (OuterVolumeSpecName: "kube-api-access-959cx") pod "1e9ddba3-128d-4025-9661-b07c5e1e9329" (UID: "1e9ddba3-128d-4025-9661-b07c5e1e9329"). InnerVolumeSpecName "kube-api-access-959cx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.499146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590585 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590636 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590652 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590661 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590674 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.646640 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 04:29:26 crc kubenswrapper[4867]: W0214 04:29:26.663077 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353b0cad_bb6a_4a68_b787_64fb7b32ee27.slice/crio-9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58 WatchSource:0}: Error finding container 9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58: Status 404 returned error can't find the container with id 9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58 Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.758053 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796d588566-h9wcn" event={"ID":"41d35864-bb64-45f3-bc1e-a7d5440c35ad","Type":"ContainerStarted","Data":"89de8ce8d39e362c2e7511282186708dca422f34e03beb9fef455082b5740e5a"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.761886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" event={"ID":"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3","Type":"ContainerDied","Data":"6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.762051 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.767671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.769318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" event={"ID":"1e9ddba3-128d-4025-9661-b07c5e1e9329","Type":"ContainerDied","Data":"37ef45b02f432cc3a119a40a28ecba25608ab0780e37295f9063ae35dc630718"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.769463 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.781691 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.808240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.815744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f1d6dceb-5ee5-407d-ade4-be35d128d8dc","Type":"ContainerStarted","Data":"cc7200bfcb007faa77f39190304eeb096c5f0018fd1bda42f79e7843d5cad132"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.815866 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 14 04:29:26 crc kubenswrapper[4867]: E0214 04:29:26.817141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.836305 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-796d588566-h9wcn" podStartSLOduration=31.836282117 podStartE2EDuration="31.836282117s" podCreationTimestamp="2026-02-14 04:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
04:29:26.785125726 +0000 UTC m=+1198.866063050" watchObservedRunningTime="2026-02-14 04:29:26.836282117 +0000 UTC m=+1198.917219431"
Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.851935 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.862908 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.903757 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.911133 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.936670 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.129870626 podStartE2EDuration="36.936643537s" podCreationTimestamp="2026-02-14 04:28:50 +0000 UTC" firstStartedPulling="2026-02-14 04:28:52.88283099 +0000 UTC m=+1164.963768304" lastFinishedPulling="2026-02-14 04:29:25.689603901 +0000 UTC m=+1197.770541215" observedRunningTime="2026-02-14 04:29:26.921718449 +0000 UTC m=+1199.002655763" watchObservedRunningTime="2026-02-14 04:29:26.936643537 +0000 UTC m=+1199.017580851"
Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.013915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9ddba3-128d-4025-9661-b07c5e1e9329" path="/var/lib/kubelet/pods/1e9ddba3-128d-4025-9661-b07c5e1e9329/volumes"
Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.014336 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" path="/var/lib/kubelet/pods/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3/volumes"
Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.275820 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.714447 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dznst"]
Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.826561 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"7c55bc7f1bc686894b3f509f6c90a39778393c04ab65d3e679be60bc6d5ef550"}
Feb 14 04:29:29 crc kubenswrapper[4867]: W0214 04:29:29.403221 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f356df8_0955_46c4_9166_2c1eef982399.slice/crio-0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71 WatchSource:0}: Error finding container 0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71: Status 404 returned error can't find the container with id 0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71
Feb 14 04:29:29 crc kubenswrapper[4867]: I0214 04:29:29.852014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71"}
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.169962 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.880533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj" event={"ID":"16c28c0f-9310-4721-87cf-2d1bb88b5bba","Type":"ContainerStarted","Data":"024c92a0d3dd82c2ce5e4b6e61d011efe3cd5c6541f2cd352e0ab0c7a014be5b"}
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.881004 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7lpqj"
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.882705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"5b651ce16dda3789a2359fd3ea8f6a35daeb189d608b4f490fddfa341b8ae70d"}
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.886915 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f356df8-0955-46c4-9166-2c1eef982399" containerID="51d0f239c29026a75fd0385ee45a13f98a3f630daa99fbd65c626a238e95f520" exitCode=0
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.887135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerDied","Data":"51d0f239c29026a75fd0385ee45a13f98a3f630daa99fbd65c626a238e95f520"}
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.893431 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"9c75821820c4bca9c1e46189f48b8c3613810190444cd4b1ab130fbd23a5988b"}
Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.910950 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7lpqj" podStartSLOduration=30.786869513 podStartE2EDuration="35.91091897s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:25.51995734 +0000 UTC m=+1197.600894654" lastFinishedPulling="2026-02-14 04:29:30.644006797 +0000 UTC m=+1202.724944111" observedRunningTime="2026-02-14 04:29:31.898912377 +0000 UTC m=+1203.979849701" watchObservedRunningTime="2026-02-14 04:29:31.91091897 +0000 UTC m=+1203.991856284"
Feb 14 04:29:32 crc kubenswrapper[4867]: I0214 04:29:32.904194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"42e8d69fc5fa2650c4797b1653adebe3254082cae9e55c7ebdc44341f489e759"}
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.914918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"cc1860ae628ffb43275f44883ae0b3aefc69b7d7264ec66a15337b6960dc2076"}
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.918679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"aa174664422328dad834c3062854d4b324ab232201f0b150013b14519d1c38f7"}
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.919848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.919892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.922851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"967e4a50c913d0ec337bb8ac1a062e340511c61d9594b56fe9c5a8e8fc49544f"}
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.942905 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.775696911 podStartE2EDuration="34.942887231s" podCreationTimestamp="2026-02-14 04:28:59 +0000 UTC" firstStartedPulling="2026-02-14 04:29:27.308453888 +0000 UTC m=+1199.389391202" lastFinishedPulling="2026-02-14 04:29:33.475644208 +0000 UTC m=+1205.556581522" observedRunningTime="2026-02-14 04:29:33.936125375 +0000 UTC m=+1206.017062719" watchObservedRunningTime="2026-02-14 04:29:33.942887231 +0000 UTC m=+1206.023824545"
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.963195 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dznst" podStartSLOduration=36.587511448 podStartE2EDuration="37.963163609s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:29.407386362 +0000 UTC m=+1201.488323676" lastFinishedPulling="2026-02-14 04:29:30.783038523 +0000 UTC m=+1202.863975837" observedRunningTime="2026-02-14 04:29:33.954135704 +0000 UTC m=+1206.035073028" watchObservedRunningTime="2026-02-14 04:29:33.963163609 +0000 UTC m=+1206.044100923"
Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.976735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.099212155 podStartE2EDuration="37.976708071s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:26.665988107 +0000 UTC m=+1198.746925421" lastFinishedPulling="2026-02-14 04:29:33.543484023 +0000 UTC m=+1205.624421337" observedRunningTime="2026-02-14 04:29:33.970763596 +0000 UTC m=+1206.051700920" watchObservedRunningTime="2026-02-14 04:29:33.976708071 +0000 UTC m=+1206.057645385"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.102014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.164976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.184881 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"]
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.253997 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"]
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.255927 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.295997 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"]
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379859 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.482891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.483162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.483302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.484448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.485057 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.528395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.604829 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.765107 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.891774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") "
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.892298 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") "
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.892374 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") "
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.893722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.894470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config" (OuterVolumeSpecName: "config") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.902735 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4" (OuterVolumeSpecName: "kube-api-access-7j8f4") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "kube-api-access-7j8f4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.972126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4"}
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004414 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004442 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004452 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.017659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722"}
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" event={"ID":"dbe41be0-f7f8-47ff-a587-b85e282fa5ee","Type":"ContainerDied","Data":"e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b"}
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024249 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024442 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.182702 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"]
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.206587 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"]
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.338783 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"]
Feb 14 04:29:35 crc kubenswrapper[4867]: W0214 04:29:35.339013 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa85f647_f104_47eb_800c_5926241431c6.slice/crio-b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163 WatchSource:0}: Error finding container b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163: Status 404 returned error can't find the container with id b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.584368 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.590416 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592119 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-nmjhj"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592119 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592457 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.594081 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.614420 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.686151 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.686208 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.691991 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.723858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724178 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827383 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.828394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.828467 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.828896 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.829040 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:36.32901716 +0000 UTC m=+1208.409954474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.829060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.834521 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.834642 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/267768274c449aec6b5b6bd87651d01565bcb26558a88e152a72bbebcd71e6ea/globalmount\"" pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.837613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.848858 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.886626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.059630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerStarted","Data":"b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163"}
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.077201 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.171269 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.292236 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-27bx5"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.293729 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296331 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296764 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.315736 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.333780 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354016 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354077 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354120 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354378 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354394 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354433 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:37.354417805 +0000 UTC m=+1209.435355119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.380687 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.382757 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.387649 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"]
Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.388495 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-tqpbj ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-27bx5" podUID="2eb35c23-c6de-46f0-a7bf-8390d9eefd42"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.402294 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.463866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.463958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464049 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464170 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: 
I0214 04:29:36.464267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464338 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464465 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.467641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.514249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.518753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.519792 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqpbj\" (UniqueName: 
\"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.520936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.522530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.556012 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568089 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568299 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568328 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.569767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.570867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.572093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.576927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.577307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.589014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: 
\"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.601625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.611664 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.614053 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.617882 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.640815 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671746 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.696088 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.724071 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.735696 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.737447 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.741562 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.748731 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.774765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 
04:29:36.776883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.777026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.777225 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.778794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.779098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 
04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.779207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.780774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.781045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.781340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.806605 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.840860 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886241 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: 
\"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.887098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.887181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.889674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.898688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc 
kubenswrapper[4867]: I0214 04:29:36.900785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.926234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.992122 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.012702 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" path="/var/lib/kubelet/pods/dbe41be0-f7f8-47ff-a587-b85e282fa5ee/volumes"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.096165 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.109821 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa85f647-f104-47eb-800c-5926241431c6" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" exitCode=0
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.109895 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475"}
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.113413 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd"}
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.114384 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.116261 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.126833 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.132081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.136794 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.139799 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.141232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.150390 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4gz6p"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.256441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.301593 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.306889 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.311698 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts" (OuterVolumeSpecName: "scripts") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325579 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325664 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325695 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327170 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.344694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj" (OuterVolumeSpecName: "kube-api-access-tqpbj") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "kube-api-access-tqpbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.345095 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.363926 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.364818 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.374472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.482962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.483826 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.483922 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.484091 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:39.484061178 +0000 UTC m=+1211.564998492 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485486 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485540 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485554 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485568 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485581 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.486287 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.512361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.554441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.573107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608225 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608549 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") "
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.609040 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.609991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.610266 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config" (OuterVolumeSpecName: "config") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.618753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v" (OuterVolumeSpecName: "kube-api-access-xj27v") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "kube-api-access-xj27v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716780 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716815 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716825 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.765173 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.777085 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.779288 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jjsz4"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792783 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792960 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.794011 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.813181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.924880 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.924935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925195 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028884 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.030447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.031077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.035435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.035721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.036848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.043672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.080046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.119848 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.146907 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.158685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerStarted","Data":"2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d"}
Feb 14 04:29:38 crc kubenswrapper[4867]: W0214 04:29:38.160821 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e8f5ec_ba3d_4962_97f1_2be3a087852e.slice/crio-d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b WatchSource:0}: Error finding container d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b: Status 404 returned error can't find the container with id d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.166011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" event={"ID":"701367b7-aef6-43b5-a0f9-3a91206962de","Type":"ContainerStarted","Data":"781c47958fe4be489d80deefc216efc94eebd58f1a594f810fd549eb698505ed"}
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" event={"ID":"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c","Type":"ContainerDied","Data":"9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239"}
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170275 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170316 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.208475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.255411 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podStartSLOduration=3.369719905 podStartE2EDuration="44.25538503s" podCreationTimestamp="2026-02-14 04:28:54 +0000 UTC" firstStartedPulling="2026-02-14 04:28:56.259984291 +0000 UTC m=+1168.340921605" lastFinishedPulling="2026-02-14 04:29:37.145649416 +0000 UTC m=+1209.226586730" observedRunningTime="2026-02-14 04:29:38.203153352 +0000 UTC m=+1210.284090666" watchObservedRunningTime="2026-02-14 04:29:38.25538503 +0000 UTC m=+1210.336322344"
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.283523 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.293259 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.319707 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.332624 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.578953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"]
Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.893683 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.019744 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb35c23-c6de-46f0-a7bf-8390d9eefd42" path="/var/lib/kubelet/pods/2eb35c23-c6de-46f0-a7bf-8390d9eefd42/volumes"
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.020932 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" path="/var/lib/kubelet/pods/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c/volumes"
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.199167 4867 generic.go:334] "Generic (PLEG): container finished" podID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerID="b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a" exitCode=0
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.199983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.200203 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerStarted","Data":"a0ffea8d48e001e089ae4bf9bd0aae709da26664483eee7930b16346800bdb97"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.211721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerStarted","Data":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.211944 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" containerID="cri-o://f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" gracePeriod=10
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.212282 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c"
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.216671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"7d5658b951af8fdef68cbab2977b1cf3210f036612287fad2460830c62bef625"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.223791 4867 generic.go:334] "Generic (PLEG): container finished" podID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" exitCode=0
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.224398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.224463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerStarted","Data":"41aaccd20d5bf4daeae755d0c155b427f29d56138b6d3562c58792965bd5ee9b"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.228422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gz6p" event={"ID":"43e8f5ec-ba3d-4962-97f1-2be3a087852e","Type":"ContainerStarted","Data":"502b1bca37b9d2434aff0aaa6973356854bda054b38f0ecd6832adcbe53c59f9"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.228472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gz6p" event={"ID":"43e8f5ec-ba3d-4962-97f1-2be3a087852e","Type":"ContainerStarted","Data":"d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b"}
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.245402 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" podStartSLOduration=4.820032128 podStartE2EDuration="5.245381001s" podCreationTimestamp="2026-02-14 04:29:34 +0000 UTC" firstStartedPulling="2026-02-14 04:29:35.341412998 +0000 UTC m=+1207.422350312" lastFinishedPulling="2026-02-14 04:29:35.766761871 +0000 UTC m=+1207.847699185" observedRunningTime="2026-02-14 04:29:39.238190704 +0000 UTC m=+1211.319128018" watchObservedRunningTime="2026-02-14 04:29:39.245381001 +0000 UTC m=+1211.326318315"
Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.307593 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4gz6p" podStartSLOduration=3.307382613 podStartE2EDuration="3.307382613s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:39.287163257 +0000 UTC m=+1211.368100601" watchObservedRunningTime="2026-02-14 04:29:39.307382613 +0000 
UTC m=+1211.388319927" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.493154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493409 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493793 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493865 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:43.493843193 +0000 UTC m=+1215.574780507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.094453 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.225190 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x" (OuterVolumeSpecName: "kube-api-access-8zs8x") pod "fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "kube-api-access-8zs8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.243706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerStarted","Data":"d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.243889 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246037 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa85f647-f104-47eb-800c-5926241431c6" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" exitCode=0 Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246190 4867 scope.go:117] "RemoveContainer" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246256 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.248232 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerStarted","Data":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.248934 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.251204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.253687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.275451 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" podStartSLOduration=4.275429842 podStartE2EDuration="4.275429842s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:40.266933511 +0000 UTC m=+1212.347870825" watchObservedRunningTime="2026-02-14 04:29:40.275429842 +0000 UTC m=+1212.356367146" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.303190 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.314862 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-cp76f" podStartSLOduration=3.314841297 podStartE2EDuration="3.314841297s" podCreationTimestamp="2026-02-14 04:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:40.308574004 +0000 UTC m=+1212.389511328" watchObservedRunningTime="2026-02-14 04:29:40.314841297 +0000 UTC m=+1212.395778611" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318120 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config" (OuterVolumeSpecName: "config") pod "fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318150 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318185 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.419845 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.497491 4867 scope.go:117] "RemoveContainer" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.608777 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.624601 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.017853 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85f647-f104-47eb-800c-5926241431c6" path="/var/lib/kubelet/pods/fa85f647-f104-47eb-800c-5926241431c6/volumes" Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.265700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08"} Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.268764 4867 generic.go:334] "Generic 
(PLEG): container finished" podID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerID="88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722" exitCode=0 Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.268852 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerDied","Data":"88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722"} Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.272334 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4" exitCode=0 Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.273265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4"} Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.285293 4867 generic.go:334] "Generic (PLEG): container finished" podID="505de461-9e6f-4914-bf50-e2bf4149b566" containerID="6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd" exitCode=0 Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.285353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerDied","Data":"6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd"} Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.745107 4867 scope.go:117] "RemoveContainer" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:42 crc kubenswrapper[4867]: E0214 04:29:42.747771 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": container with ID starting with f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c not found: ID does not exist" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.747819 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"} err="failed to get container status \"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": rpc error: code = NotFound desc = could not find container \"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": container with ID starting with f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c not found: ID does not exist" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.747848 4867 scope.go:117] "RemoveContainer" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:42 crc kubenswrapper[4867]: E0214 04:29:42.748354 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": container with ID starting with 7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475 not found: ID does not exist" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.748378 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475"} err="failed to get container status \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": rpc error: code = NotFound desc = could not find container \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": container with ID 
starting with 7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475 not found: ID does not exist" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.297035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.302176 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.305482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerStarted","Data":"fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.307718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"6be4d4eb29aec6a4a6bed660df9a7013dba5f0240aa9354739d1d64a318f086d"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.319920 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371982.534874 podStartE2EDuration="54.31990206s" podCreationTimestamp="2026-02-14 04:28:49 +0000 UTC" firstStartedPulling="2026-02-14 04:28:53.078831149 +0000 UTC m=+1165.159768463" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:43.319288414 +0000 UTC m=+1215.400225728" watchObservedRunningTime="2026-02-14 04:29:43.31990206 +0000 UTC m=+1215.400839374" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.353701 
4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.175407872 podStartE2EDuration="56.353659108s" podCreationTimestamp="2026-02-14 04:28:47 +0000 UTC" firstStartedPulling="2026-02-14 04:28:50.57566008 +0000 UTC m=+1162.656597394" lastFinishedPulling="2026-02-14 04:29:33.753911306 +0000 UTC m=+1205.834848630" observedRunningTime="2026-02-14 04:29:43.344631103 +0000 UTC m=+1215.425568417" watchObservedRunningTime="2026-02-14 04:29:43.353659108 +0000 UTC m=+1215.434596432" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.368078 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dc8sm" podStartSLOduration=2.17615468 podStartE2EDuration="7.368060983s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="2026-02-14 04:29:37.606525503 +0000 UTC m=+1209.687462817" lastFinishedPulling="2026-02-14 04:29:42.798431806 +0000 UTC m=+1214.879369120" observedRunningTime="2026-02-14 04:29:43.364706515 +0000 UTC m=+1215.445643829" watchObservedRunningTime="2026-02-14 04:29:43.368060983 +0000 UTC m=+1215.448998297" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.500185 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500472 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500538 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500616 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:51.50059229 +0000 UTC m=+1223.581529604 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:46 crc kubenswrapper[4867]: I0214 04:29:46.350328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d"} Feb 14 04:29:46 crc kubenswrapper[4867]: E0214 04:29:46.774326 4867 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:39416->38.102.83.113:33373: write tcp 38.102.83.113:39416->38.102.83.113:33373: write: connection reset by peer Feb 14 04:29:46 crc kubenswrapper[4867]: I0214 04:29:46.994721 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.768765 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.830002 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.833856 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" 
containerID="cri-o://d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" gracePeriod=10 Feb 14 04:29:48 crc kubenswrapper[4867]: I0214 04:29:48.381120 4867 generic.go:334] "Generic (PLEG): container finished" podID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerID="d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" exitCode=0 Feb 14 04:29:48 crc kubenswrapper[4867]: I0214 04:29:48.381179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab"} Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.632272 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.632671 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.982008 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.514531 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.896490 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.983870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984452 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.988669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f" (OuterVolumeSpecName: "kube-api-access-p426f") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "kube-api-access-p426f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.044306 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config" (OuterVolumeSpecName: "config") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.051153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.060426 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092365 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092406 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092421 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092435 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.420390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"a0ffea8d48e001e089ae4bf9bd0aae709da26664483eee7930b16346800bdb97"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422225 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422210 4867 scope.go:117] "RemoveContainer" containerID="d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.424020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerStarted","Data":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.424227 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.426878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"45cdcdab2bca2f249b4526281374a26986b946d9e5b8bf5149fcc82a569681fc"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.426953 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.430009 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.430045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.483623 4867 scope.go:117] "RemoveContainer" containerID="b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484593 4867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484608 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484633 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484641 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484661 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484668 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484680 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484685 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484889 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484899 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.485839 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.496921 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.501449 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502058 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502088 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502141 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:30:07.502124551 +0000 UTC m=+1239.583061865 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.531699 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.554449 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.167913319 podStartE2EDuration="58.554427841s" podCreationTimestamp="2026-02-14 04:28:53 +0000 UTC" firstStartedPulling="2026-02-14 04:29:07.58666246 +0000 UTC m=+1179.667599774" lastFinishedPulling="2026-02-14 04:29:50.973176982 +0000 UTC m=+1223.054114296" observedRunningTime="2026-02-14 04:29:51.497262034 +0000 UTC m=+1223.578199348" watchObservedRunningTime="2026-02-14 04:29:51.554427841 +0000 UTC m=+1223.635365155" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.607340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.609664 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.640949 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=10.869751372 podStartE2EDuration="14.640922141s" podCreationTimestamp="2026-02-14 04:29:37 +0000 UTC" firstStartedPulling="2026-02-14 04:29:38.974413313 +0000 UTC m=+1211.055350627" lastFinishedPulling="2026-02-14 04:29:42.745584082 +0000 UTC m=+1214.826521396" observedRunningTime="2026-02-14 04:29:51.528588309 +0000 UTC m=+1223.609525623" watchObservedRunningTime="2026-02-14 04:29:51.640922141 +0000 UTC m=+1223.721859455" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.690719 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.705879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.705973 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.706061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: 
\"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.706187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.723593 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.733227 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.756053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: 
\"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.809453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.809625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.829311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.838413 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwgw\" (UniqueName: 
\"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.847427 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.943791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.314617 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.441690 4867 generic.go:334] "Generic (PLEG): container finished" podID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerID="fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87" exitCode=0 Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.441768 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerDied","Data":"fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87"} Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.443920 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerStarted","Data":"46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9"} Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.527875 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:52 crc kubenswrapper[4867]: W0214 04:29:52.529211 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fef49b7_7486_40dc_aedc_9814adb071e2.slice/crio-045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8 WatchSource:0}: Error finding container 045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8: Status 404 returned error can't find the container with id 045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8 Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.565566 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.627879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.629499 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.653972 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.728356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.728641 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.740063 4867 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.742000 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.749792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.752969 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.759027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.765933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.773827 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: 
I0214 04:29:52.830141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830347 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.831222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " 
pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.845238 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.847225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.849100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.851770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.876133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " 
pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932662 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.933115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.933314 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.934086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod 
\"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.934103 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.952783 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.956378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.995452 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.008606 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" path="/var/lib/kubelet/pods/deae29d8-abfa-4fe4-8314-b02cf70eb5be/volumes" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.034920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.035681 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.035691 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.058055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 
14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.118248 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.139662 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.181935 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.462816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerStarted","Data":"ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.463107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerStarted","Data":"045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.493129 4867 generic.go:334] "Generic (PLEG): container finished" podID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerID="63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2" exitCode=0 Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.493664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerDied","Data":"63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.494355 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-t56pc" podStartSLOduration=2.4943317990000002 
podStartE2EDuration="2.494331799s" podCreationTimestamp="2026-02-14 04:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:53.488926918 +0000 UTC m=+1225.569864232" watchObservedRunningTime="2026-02-14 04:29:53.494331799 +0000 UTC m=+1225.575269113" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.592047 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.801669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.074227 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.158554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.181990 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183110 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183521 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") "
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") "
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183732 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") "
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") "
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") "
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.185541 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.187564 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.200670 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"]
Feb 14 04:29:54 crc kubenswrapper[4867]: E0214 04:29:54.201668 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.201688 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.203255 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.203641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.204419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.212986 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"]
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.235171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk" (OuterVolumeSpecName: "kube-api-access-8d9fk") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "kube-api-access-8d9fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290877 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290939 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290949 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290959 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290969 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.359576 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"]
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.361013 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.376262 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.392906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.393001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.394191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.408644 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.415697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.423854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts" (OuterVolumeSpecName: "scripts") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.426200 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.426265 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"]
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.494856 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.494975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495129 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495140 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495150 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.555394 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerStarted","Data":"3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.582220 4867 generic.go:334] "Generic (PLEG): container finished" podID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerID="ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605" exitCode=0
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.582284 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerDied","Data":"ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.598790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.598893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.601110 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.614794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerStarted","Data":"4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.614847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerStarted","Data":"1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.627947 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerDied","Data":"2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.627990 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.628055 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634362 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerStarted","Data":"8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.638806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerStarted","Data":"003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f"}
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.655620 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-qmj24" podStartSLOduration=2.655586883 podStartE2EDuration="2.655586883s" podCreationTimestamp="2026-02-14 04:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:54.652370559 +0000 UTC m=+1226.733307873" watchObservedRunningTime="2026-02-14 04:29:54.655586883 +0000 UTC m=+1226.736524207"
Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.659544 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2"
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.120187 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw"
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.217736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") "
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.217876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") "
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.218566 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b72434a2-25c0-4fd4-89cf-eff7bee167c3" (UID: "b72434a2-25c0-4fd4-89cf-eff7bee167c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.251460 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw" (OuterVolumeSpecName: "kube-api-access-4dwgw") pod "b72434a2-25c0-4fd4-89cf-eff7bee167c3" (UID: "b72434a2-25c0-4fd4-89cf-eff7bee167c3"). InnerVolumeSpecName "kube-api-access-4dwgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.320056 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.320096 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.343613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"]
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.351357 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"]
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.651688 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw"
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.651736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerDied","Data":"46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.652128 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9"
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.654498 4867 generic.go:334] "Generic (PLEG): container finished" podID="853d3739-366e-498f-ac28-6df19ee88dee" containerID="4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af" exitCode=0
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.654595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerDied","Data":"4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.659349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.661968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerStarted","Data":"b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.664863 4867 generic.go:334] "Generic (PLEG): container finished" podID="b10f828b-59d6-4eb2-8922-aec92f274280" containerID="4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4" exitCode=0
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.664923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerDied","Data":"4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.666951 4867 generic.go:334] "Generic (PLEG): container finished" podID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerID="659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7" exitCode=0
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.666997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerDied","Data":"659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.669385 4867 generic.go:334] "Generic (PLEG): container finished" podID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerID="6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb" exitCode=0
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.669491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerDied","Data":"6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb"}
Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.671005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerStarted","Data":"e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c"}
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.234049 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t56pc"
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.349808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"0fef49b7-7486-40dc-aedc-9814adb071e2\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") "
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.349885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"0fef49b7-7486-40dc-aedc-9814adb071e2\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") "
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.350590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fef49b7-7486-40dc-aedc-9814adb071e2" (UID: "0fef49b7-7486-40dc-aedc-9814adb071e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.370381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr" (OuterVolumeSpecName: "kube-api-access-9fxhr") pod "0fef49b7-7486-40dc-aedc-9814adb071e2" (UID: "0fef49b7-7486-40dc-aedc-9814adb071e2"). InnerVolumeSpecName "kube-api-access-9fxhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.452835 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.452881 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.682995 4867 generic.go:334] "Generic (PLEG): container finished" podID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerID="f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962" exitCode=0
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.683111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerDied","Data":"f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962"}
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.684918 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa8913cb-b163-4973-b6e2-ac741177964e" containerID="41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c" exitCode=0
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.685029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerDied","Data":"41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c"}
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerDied","Data":"045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8"}
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687415 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8"
Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687631 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t56pc"
Feb 14 04:29:56 crc kubenswrapper[4867]: E0214 04:29:56.765718 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fef49b7_7486_40dc_aedc_9814adb071e2.slice\": RecentStats: unable to find data in memory cache]"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.238199 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.414010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.415417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62c2a1e-55e4-4b7d-90db-ab37eecdb659" (UID: "e62c2a1e-55e4-4b7d-90db-ab37eecdb659"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.416770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.417355 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.435553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c" (OuterVolumeSpecName: "kube-api-access-lwv5c") pod "e62c2a1e-55e4-4b7d-90db-ab37eecdb659" (UID: "e62c2a1e-55e4-4b7d-90db-ab37eecdb659"). InnerVolumeSpecName "kube-api-access-lwv5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.519930 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.564002 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.670086 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.680345 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qmj24"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705650 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qmj24"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerDied","Data":"1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e"}
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705939 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerDied","Data":"8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6"}
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707594 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707635 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerDied","Data":"003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f"}
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712418 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712472 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.727841 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"b10f828b-59d6-4eb2-8922-aec92f274280\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.727921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"b10f828b-59d6-4eb2-8922-aec92f274280\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.729118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b10f828b-59d6-4eb2-8922-aec92f274280" (UID: "b10f828b-59d6-4eb2-8922-aec92f274280"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.745899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm" (OuterVolumeSpecName: "kube-api-access-cf9dm") pod "b10f828b-59d6-4eb2-8922-aec92f274280" (UID: "b10f828b-59d6-4eb2-8922-aec92f274280"). InnerVolumeSpecName "kube-api-access-cf9dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774487 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774688 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerDied","Data":"3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552"}
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774742 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552"
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831730 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"af1b76a6-cc66-4a23-893d-df38ba5aac38\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"853d3739-366e-498f-ac28-6df19ee88dee\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831893 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"853d3739-366e-498f-ac28-6df19ee88dee\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"af1b76a6-cc66-4a23-893d-df38ba5aac38\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") "
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.832427 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.832440 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.841250 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "853d3739-366e-498f-ac28-6df19ee88dee" (UID: "853d3739-366e-498f-ac28-6df19ee88dee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.841660 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af1b76a6-cc66-4a23-893d-df38ba5aac38" (UID: "af1b76a6-cc66-4a23-893d-df38ba5aac38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.852680 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4" (OuterVolumeSpecName: "kube-api-access-wgmt4") pod "853d3739-366e-498f-ac28-6df19ee88dee" (UID: "853d3739-366e-498f-ac28-6df19ee88dee"). InnerVolumeSpecName "kube-api-access-wgmt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.869808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp" (OuterVolumeSpecName: "kube-api-access-z2vmp") pod "af1b76a6-cc66-4a23-893d-df38ba5aac38" (UID: "af1b76a6-cc66-4a23-893d-df38ba5aac38"). InnerVolumeSpecName "kube-api-access-z2vmp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.923011 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.923763 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.923901 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.923987 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924048 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924106 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924206 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924253 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924317 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924383 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924436 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924494 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924829 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924902 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924970 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925037 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925178 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925256 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" 
containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.926242 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.932913 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937016 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937053 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937065 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937076 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.971578 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.047879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod 
\"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.048041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.157251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.157441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.159287 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.208074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gg7t\" (UniqueName: 
\"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.255120 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.348763 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.624116 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.771522 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.771644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.772465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1207dbcf-080a-40c2-a0cb-ab39e7225aaf" (UID: "1207dbcf-080a-40c2-a0cb-ab39e7225aaf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.775821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv" (OuterVolumeSpecName: "kube-api-access-7mnvv") pod "1207dbcf-080a-40c2-a0cb-ab39e7225aaf" (UID: "1207dbcf-080a-40c2-a0cb-ab39e7225aaf"). InnerVolumeSpecName "kube-api-access-7mnvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerDied","Data":"e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c"} Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802615 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802625 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.874380 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.874422 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.924104 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:59 crc kubenswrapper[4867]: W0214 04:29:59.600905 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e82314_4716_4d79_b6bf_777f09ee83f7.slice/crio-90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c WatchSource:0}: Error finding container 90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c: Status 404 returned error can't find the container with id 90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.733344 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerDied","Data":"b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf"} Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822415 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822480 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.824882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerStarted","Data":"90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c"} Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.893653 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"fa8913cb-b163-4973-b6e2-ac741177964e\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.893976 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"fa8913cb-b163-4973-b6e2-ac741177964e\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.894684 4867 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa8913cb-b163-4973-b6e2-ac741177964e" (UID: "fa8913cb-b163-4973-b6e2-ac741177964e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.899565 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl" (OuterVolumeSpecName: "kube-api-access-cbsfl") pod "fa8913cb-b163-4973-b6e2-ac741177964e" (UID: "fa8913cb-b163-4973-b6e2-ac741177964e"). InnerVolumeSpecName "kube-api-access-cbsfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.996723 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.996770 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.136146 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:00 crc kubenswrapper[4867]: E0214 04:30:00.136984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc 
kubenswrapper[4867]: E0214 04:30:00.137177 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137286 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137590 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137692 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.139034 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.141127 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.141243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.159926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: 
\"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.406736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.410650 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.426565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.462267 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.835401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d"} Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.836666 4867 generic.go:334] "Generic (PLEG): container finished" podID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerID="f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25" exitCode=0 Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.836716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerDied","Data":"f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25"} Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.865406 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.664804234 podStartE2EDuration="1m6.865385879s" podCreationTimestamp="2026-02-14 04:28:54 +0000 UTC" firstStartedPulling="2026-02-14 04:29:26.500245996 +0000 UTC m=+1198.581183310" lastFinishedPulling="2026-02-14 04:29:59.700827641 +0000 UTC m=+1231.781764955" observedRunningTime="2026-02-14 04:30:00.862319697 +0000 UTC m=+1232.943257011" watchObservedRunningTime="2026-02-14 04:30:00.865385879 +0000 UTC m=+1232.946323193" Feb 14 04:30:00 crc kubenswrapper[4867]: W0214 04:30:00.932544 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c88887_cc0d_4b61_9ccc_e5583c27322f.slice/crio-59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40 WatchSource:0}: Error finding container 
59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40: Status 404 returned error can't find the container with id 59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40 Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.933750 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.402068 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6c8864b6b5-mwdd6" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" containerID="cri-o://c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" gracePeriod=15 Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.402756 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.807404 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.813302 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.817978 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.818197 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.823633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854655 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854711 4867 generic.go:334] "Generic (PLEG): container finished" podID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerID="c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" exitCode=2 Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerDied","Data":"c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e"} Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.858226 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerID="1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d" exitCode=0 Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.858302 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerDied","Data":"1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d"} Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 
04:30:01.858358 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerStarted","Data":"59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40"} Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.944788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.944966 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.945016 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.945193 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.947774 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/ovn-controller-7lpqj" podUID="16c28c0f-9310-4721-87cf-2d1bb88b5bba" containerName="ovn-controller" probeResult="failure" output=< Feb 14 04:30:01 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 04:30:01 crc kubenswrapper[4867]: > Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047645 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047724 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.053665 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.053663 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.053704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.064317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.124769 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.125112 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.148397 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.257703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258406 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258566 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258702 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.259854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca" (OuterVolumeSpecName: "service-ca") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.259884 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config" (OuterVolumeSpecName: "console-config") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.260547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.261041 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.263603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.263822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.267347 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87" (OuterVolumeSpecName: "kube-api-access-lnn87") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "kube-api-access-lnn87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361617 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361655 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361669 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361681 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361699 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361712 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361720 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc 
kubenswrapper[4867]: I0214 04:30:02.388490 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.463361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"69e82314-4716-4d79-b6bf-777f09ee83f7\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.464069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"69e82314-4716-4d79-b6bf-777f09ee83f7\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.465243 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69e82314-4716-4d79-b6bf-777f09ee83f7" (UID: "69e82314-4716-4d79-b6bf-777f09ee83f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.472392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t" (OuterVolumeSpecName: "kube-api-access-4gg7t") pod "69e82314-4716-4d79-b6bf-777f09ee83f7" (UID: "69e82314-4716-4d79-b6bf-777f09ee83f7"). InnerVolumeSpecName "kube-api-access-4gg7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.566798 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.566857 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.804397 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerDied","Data":"90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877551 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877522 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.879022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerStarted","Data":"a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881244 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerDied","Data":"999f569ca24af828fccac613f37abfd55e6b13b288390e3bcddcc9896a94a3f7"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881797 4867 scope.go:117] "RemoveContainer" containerID="c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881832 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.927348 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.936849 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.026672 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" path="/var/lib/kubelet/pods/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/volumes" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.371639 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506884 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.509131 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.514719 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.517977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2" (OuterVolumeSpecName: "kube-api-access-4dxx2") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "kube-api-access-4dxx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611598 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611628 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611638 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902156 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerDied","Data":"59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40"} Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902224 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.180465 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650236 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650749 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650774 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650798 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650807 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650827 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" 
containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650834 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651021 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651034 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651048 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.661678 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.671858 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.739314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.739668 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod 
\"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.749550 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.760575 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.842563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.842776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.844365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.857815 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:30:04 crc 
kubenswrapper[4867]: I0214 04:30:04.859238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.861607 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.864882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.870648 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.944423 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.944498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.993444 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.014739 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" path="/var/lib/kubelet/pods/69e82314-4716-4d79-b6bf-777f09ee83f7/volumes"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.046652 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.046722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.047886 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.066698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.224493 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.394492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"]
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.558344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"]
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.932783 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7lpqj" podUID="16c28c0f-9310-4721-87cf-2d1bb88b5bba" containerName="ovn-controller" probeResult="failure" output=<
Feb 14 04:30:06 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 14 04:30:06 crc kubenswrapper[4867]: >
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.935704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerStarted","Data":"7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a"}
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.935745 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerStarted","Data":"78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de"}
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939283 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerID="50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153" exitCode=0
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939344 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerDied","Data":"50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153"}
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerStarted","Data":"38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a"}
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.962338 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" podStartSLOduration=2.9623119989999998 podStartE2EDuration="2.962311999s" podCreationTimestamp="2026-02-14 04:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:06.951588515 +0000 UTC m=+1239.032525849" watchObservedRunningTime="2026-02-14 04:30:06.962311999 +0000 UTC m=+1239.043249313"
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.987005 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.996561 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dznst"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.245666 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"]
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.247180 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.252724 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.254934 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"]
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297056 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297103 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297220 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.398956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399090 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399163 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.401222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.401605 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.427835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.574345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.604162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.609926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.715415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.969886 4867 generic.go:334] "Generic (PLEG): container finished" podID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerID="7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a" exitCode=0
Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.970120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerDied","Data":"7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a"}
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.151146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"]
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.160202 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wrzv9"]
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.162694 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.165625 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.170453 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrzv9"]
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.217058 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.217217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.319047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.319190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.320221 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.341551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.439797 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 14 04:30:08 crc kubenswrapper[4867]: W0214 04:30:08.450941 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d9f9909_1442_4d83_b2aa_0f58d4022338.slice/crio-23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6 WatchSource:0}: Error finding container 23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6: Status 404 returned error can't find the container with id 23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.458720 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.477868 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.518878 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.522607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") "
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") "
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" (UID: "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523891 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.526694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq" (OuterVolumeSpecName: "kube-api-access-9b8cq") pod "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" (UID: "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"). InnerVolumeSpecName "kube-api-access-9b8cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.626379 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.985299 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.982196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerDied","Data":"38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a"}
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.986230 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a"
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.987401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerStarted","Data":"026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d"}
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.987457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerStarted","Data":"75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b"}
Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.990402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6"}
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.021036 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7lpqj-config-4bc2q" podStartSLOduration=2.021014042 podStartE2EDuration="2.021014042s" podCreationTimestamp="2026-02-14 04:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:09.005889141 +0000 UTC m=+1241.086826455" watchObservedRunningTime="2026-02-14 04:30:09.021014042 +0000 UTC m=+1241.101951376"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.027379 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrzv9"]
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.421486 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.447437 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"36e07f1b-6481-42a9-a605-b472a8cc3945\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") "
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.447551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"36e07f1b-6481-42a9-a605-b472a8cc3945\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") "
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.448364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36e07f1b-6481-42a9-a605-b472a8cc3945" (UID: "36e07f1b-6481-42a9-a605-b472a8cc3945"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.453568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m" (OuterVolumeSpecName: "kube-api-access-w8m5m") pod "36e07f1b-6481-42a9-a605-b472a8cc3945" (UID: "36e07f1b-6481-42a9-a605-b472a8cc3945"). InnerVolumeSpecName "kube-api-access-w8m5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.550731 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.550767 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.965897 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:30:09 crc kubenswrapper[4867]: E0214 04:30:09.966661 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966679 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create"
Feb 14 04:30:09 crc kubenswrapper[4867]: E0214 04:30:09.966699 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966707 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966968 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966988 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.967806 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.976300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.984098 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.037968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerDied","Data":"78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de"}
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.038012 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.038071 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039846 4867 generic.go:334] "Generic (PLEG): container finished" podID="27300ba4-09df-4f4c-b247-4ba37572690d" containerID="7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f" exitCode=0
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerDied","Data":"7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f"}
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerStarted","Data":"43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e"}
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.045576 4867 generic.go:334] "Generic (PLEG): container finished" podID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerID="026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d" exitCode=0
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.045643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerDied","Data":"026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d"}
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.172976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.173131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.173176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.281795 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.282324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.293942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.395201 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.994024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.058466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerStarted","Data":"5b4f6da6858b80468a9ce475d2d3c8ccdc38ea567758289aef5a49879e4b28e8"}
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.403176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.417820 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.625036 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q"
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.631349 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrzv9"
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736609 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736715 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"27300ba4-09df-4f4c-b247-4ba37572690d\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736753 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736904 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737114 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"27300ba4-09df-4f4c-b247-4ba37572690d\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737145 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") "
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737638 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run" (OuterVolumeSpecName: "var-run") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.738201 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27300ba4-09df-4f4c-b247-4ba37572690d" (UID: "27300ba4-09df-4f4c-b247-4ba37572690d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.738733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.742805 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts" (OuterVolumeSpecName: "scripts") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.742869 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx" (OuterVolumeSpecName: "kube-api-access-76lbx") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "kube-api-access-76lbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.745185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x" (OuterVolumeSpecName: "kube-api-access-7mz5x") pod "27300ba4-09df-4f4c-b247-4ba37572690d" (UID: "27300ba4-09df-4f4c-b247-4ba37572690d"). InnerVolumeSpecName "kube-api-access-7mz5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.839990 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840034 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840045 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840055 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840064 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840072 4867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840081 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840089 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.922745 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7lpqj" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.086774 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"398bb71b3312708e844a7aaca2933d075418d20c032c02c76d00e463b4d57eae"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091260 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerDied","Data":"43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091436 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092719 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerDied","Data":"75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092753 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092770 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.103914 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.138213 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.155107 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.011071 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" path="/var/lib/kubelet/pods/5aa59e7c-c4ba-4a88-9744-c2b0752de11e/volumes" Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.105677 4867 generic.go:334] "Generic (PLEG): container finished" podID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerID="2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.105743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea"} Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.111297 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerID="262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.111378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230"} Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.112998 4867 generic.go:334] "Generic (PLEG): container finished" podID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerID="cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.113668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08"} Feb 14 04:30:14 crc kubenswrapper[4867]: I0214 04:30:14.759643 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:14 crc kubenswrapper[4867]: I0214 04:30:14.774501 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.012923 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" path="/var/lib/kubelet/pods/27300ba4-09df-4f4c-b247-4ba37572690d/volumes" Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.926398 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927104 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" containerID="cri-o://4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" gracePeriod=600 Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927202 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" containerID="cri-o://ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" gracePeriod=600 Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927258 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" containerID="cri-o://c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" gracePeriod=600 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140652 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" exitCode=0 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140688 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" exitCode=0 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d"} Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5"} Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.403364 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" probeResult="failure" output="Get 
\"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: connection refused" Feb 14 04:30:17 crc kubenswrapper[4867]: I0214 04:30:17.153266 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" exitCode=0 Feb 14 04:30:17 crc kubenswrapper[4867]: I0214 04:30:17.153352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e"} Feb 14 04:30:18 crc kubenswrapper[4867]: I0214 04:30:18.200901 4867 generic.go:334] "Generic (PLEG): container finished" podID="6bc83863-74f4-4509-969c-0f3305a542a8" containerID="da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d" exitCode=0 Feb 14 04:30:18 crc kubenswrapper[4867]: I0214 04:30:18.200953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d"} Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.787685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:19 crc kubenswrapper[4867]: E0214 04:30:19.788759 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.788779 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: E0214 04:30:19.788800 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" 
containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.788806 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.789113 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.789141 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.790095 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.794898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.808321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.937963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.938119 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " 
pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.039890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.040076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.040698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.073389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.114154 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:21 crc kubenswrapper[4867]: I0214 04:30:21.403536 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: connection refused" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.543609 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.544694 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-ap
i-access-wlznh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gzvxs_openstack(e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.546422 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gzvxs" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.162864 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.173443 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.268403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.268818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.278958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.279212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.295097 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"a8c244694fc7435e98a557f3f04e1495068244e31e093a964628b855e26e1004"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.321784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.322654 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:30:22 crc 
kubenswrapper[4867]: I0214 04:30:22.324680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324728 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324750 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325000 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325041 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327228 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: 
"prometheus-metric-storage-rulefiles-1") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328566 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328582 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328593 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.334756 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.336687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v" (OuterVolumeSpecName: "kube-api-access-tpz8v") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "kube-api-access-tpz8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.342667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out" (OuterVolumeSpecName: "config-out") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.344423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.344571 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config" (OuterVolumeSpecName: "config") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346139 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346268 4867 scope.go:117] "RemoveContainer" containerID="ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.348154 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371940.506638 podStartE2EDuration="1m36.348138004s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.250294097 +0000 UTC m=+1161.331231411" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:22.319129885 +0000 UTC m=+1254.400067199" watchObservedRunningTime="2026-02-14 04:30:22.348138004 +0000 UTC m=+1254.429075318" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.364548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.364801 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.368469 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gzvxs" 
podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.380770 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.562173676 podStartE2EDuration="1m36.380747158s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.106641381 +0000 UTC m=+1161.187578685" lastFinishedPulling="2026-02-14 04:29:37.925214853 +0000 UTC m=+1210.006152167" observedRunningTime="2026-02-14 04:30:22.345095953 +0000 UTC m=+1254.426033257" watchObservedRunningTime="2026-02-14 04:30:22.380747158 +0000 UTC m=+1254.461684472" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.382300 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "pvc-0eda836b-4d69-49e8-a582-e29da56fd005". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.392643 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371939.462154 podStartE2EDuration="1m37.392620933s" podCreationTimestamp="2026-02-14 04:28:45 +0000 UTC" firstStartedPulling="2026-02-14 04:28:47.783547537 +0000 UTC m=+1159.864484851" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:22.391838812 +0000 UTC m=+1254.472776146" watchObservedRunningTime="2026-02-14 04:30:22.392620933 +0000 UTC m=+1254.473558247" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.404870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config" (OuterVolumeSpecName: "web-config") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430353 4867 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430405 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") on node \"crc\" " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430420 4867 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430431 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430442 4867 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430458 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430473 4867 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 14 
04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.446141 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371940.408657 podStartE2EDuration="1m36.44611959s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.444862718 +0000 UTC m=+1161.525800032" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:22.41326867 +0000 UTC m=+1254.494205984" watchObservedRunningTime="2026-02-14 04:30:22.44611959 +0000 UTC m=+1254.527056914" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.468475 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.468661 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0eda836b-4d69-49e8-a582-e29da56fd005" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005") on node "crc" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.535066 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.692872 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.715325 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.733982 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734525 4867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734544 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734561 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734569 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734584 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="init-config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734592 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="init-config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734607 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734613 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734819 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734843 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734854 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.736791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.744634 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.744905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745069 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745181 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745397 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745632 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dgxf9" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.764295 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.767009 4867 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840849 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc 
kubenswrapper[4867]: I0214 04:30:22.841344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841601 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841634 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " 
pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.943901 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc755009c_2bb6_4f8f_9b53_460a0e4c9447.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc755009c_2bb6_4f8f_9b53_460a0e4c9447.slice/crio-bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77\": RecentStats: unable to find data in memory cache]" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.945297 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.945822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.948603 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.956390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.960034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc 
kubenswrapper[4867]: I0214 04:30:22.963716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.965850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.967109 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.971906 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.972321 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 
04:30:22.990412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.994951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.999263 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.999316 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c69566d4c941ca8a51b196b92114beed9536eafb9e04e7c441265c9a20c9feb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.033675 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" path="/var/lib/kubelet/pods/c755009c-2bb6-4f8f-9b53-460a0e4c9447/volumes" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.133046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.149956 4867 scope.go:117] "RemoveContainer" containerID="c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.207100 4867 scope.go:117] "RemoveContainer" containerID="4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.235621 4867 scope.go:117] "RemoveContainer" containerID="a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.369702 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.376612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerStarted","Data":"55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.029548 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.390381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerStarted","Data":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.392760 4867 generic.go:334] "Generic (PLEG): container finished" podID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerID="d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786" exitCode=0 Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.392907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerDied","Data":"d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.394212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"6ce12f713c335690a2513c8e5c41d62f6986ad5075f62ffd19b2415ed4452ee3"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.397163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"8da0aba94a4a7f951a4223525abb26545d4de6b66bc664f318fbea2911415a00"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.397522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"81fa65452a80b2b0c08b069b6dc00609e7d342372926d36aa52b686f77827908"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.420019 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.226813423 podStartE2EDuration="15.420000695s" podCreationTimestamp="2026-02-14 04:30:09 +0000 UTC" firstStartedPulling="2026-02-14 04:30:11.015305349 +0000 UTC m=+1243.096242663" lastFinishedPulling="2026-02-14 04:30:23.208492621 +0000 UTC m=+1255.289429935" observedRunningTime="2026-02-14 04:30:24.414179641 +0000 UTC m=+1256.495116965" watchObservedRunningTime="2026-02-14 04:30:24.420000695 +0000 UTC m=+1256.500938009" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.783214 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928396 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928590 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" (UID: "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.929730 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.933494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562" (OuterVolumeSpecName: "kube-api-access-6d562") pod "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" (UID: "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb"). InnerVolumeSpecName "kube-api-access-6d562". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.031955 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.421759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"efe6e04eebaa51a773f5ff3806454667c86de6e90aca56835882d6444f13caf6"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422015 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"1d04d99377725127e69150ab5bcfe965bd9b461870f6da8cec3a2ff8ed034518"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"686c73d533d973bb8153f7ef7326df85061774a9fc70120d3fa377b9e1387640"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"821b3a185a45849e9a1559bbd36e59ca48eb8658d160347636a2e1ad9db67f35"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerDied","Data":"55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425046 4867 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425100 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.462257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52"} Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.502355 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"f3d0f45e0c8dcd27bf4c357f0760d7a88a21e4563646ed613a1f70b025d12de5"} Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.502416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"c56227dc31954100c0a5590c97187517174534391f7211ff7f5c7d543338986c"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"e945d1f621042caf5ca683f5765ab964805633398e805c62b5aa6173921fddae"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"0739eae9cc92796e37e6cda70f62623c7572481a8b083af30af4fa79725c6a08"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"8d7a4565e255548858645306dc0d9afcb0a7904220d938de57a504f698e378c1"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"cd2da4f9efd152f80c27e4bd000bff2c0c53a6103369c05f9ed818f34dcba557"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519828 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"2969b37cbba482ef69bdafb4a19ad0556d600cc40f9bed477663d1cf485b5736"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.556389 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.465310755 podStartE2EDuration="55.556368372s" podCreationTimestamp="2026-02-14 04:29:34 +0000 UTC" firstStartedPulling="2026-02-14 04:30:08.458402764 +0000 UTC m=+1240.539340078" lastFinishedPulling="2026-02-14 04:30:27.549460381 +0000 UTC m=+1259.630397695" observedRunningTime="2026-02-14 04:30:29.555072568 +0000 UTC m=+1261.636009882" watchObservedRunningTime="2026-02-14 04:30:29.556368372 +0000 UTC m=+1261.637305686" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.886829 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:29 crc kubenswrapper[4867]: E0214 04:30:29.887273 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.887293 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: 
I0214 04:30:29.887537 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.888675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.890913 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.904657 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919718 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919840 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919863 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919914 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022388 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.024115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.042078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.511763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.063804 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:31 crc kubenswrapper[4867]: W0214 04:30:31.073752 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d457dc_19b4_4279_8c97_930f91291f98.slice/crio-3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85 WatchSource:0}: Error finding container 3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85: Status 404 returned error can't find the container with id 3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85 Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.542865 4867 generic.go:334] "Generic (PLEG): container finished" podID="e2d457dc-19b4-4279-8c97-930f91291f98" containerID="3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e" exitCode=0 Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.542968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e"} Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.543167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerStarted","Data":"3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85"} Feb 14 04:30:32 crc kubenswrapper[4867]: I0214 04:30:32.555658 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerStarted","Data":"cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c"} Feb 14 04:30:32 crc 
kubenswrapper[4867]: I0214 04:30:32.556847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:32 crc kubenswrapper[4867]: I0214 04:30:32.578477 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" podStartSLOduration=3.578457613 podStartE2EDuration="3.578457613s" podCreationTimestamp="2026-02-14 04:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:32.572886976 +0000 UTC m=+1264.653824300" watchObservedRunningTime="2026-02-14 04:30:32.578457613 +0000 UTC m=+1264.659394927" Feb 14 04:30:34 crc kubenswrapper[4867]: I0214 04:30:34.574622 4867 generic.go:334] "Generic (PLEG): container finished" podID="8c8003cd-8992-4714-96a2-2e649aead118" containerID="f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52" exitCode=0 Feb 14 04:30:34 crc kubenswrapper[4867]: I0214 04:30:34.574684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerDied","Data":"f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52"} Feb 14 04:30:35 crc kubenswrapper[4867]: I0214 04:30:35.585643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"bd8e323aacf47614946e6e8abe299298cfd77b36b733c7407e17376f135951d2"} Feb 14 04:30:37 crc kubenswrapper[4867]: I0214 04:30:37.213840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:30:37 crc kubenswrapper[4867]: I0214 04:30:37.920288 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.058650 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.252767 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.663649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"0b63afd8dc2a0d8476b73e2da3aa351e15e56e5caa4dd0a689dae875878c5456"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.002747 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2e27a3cb_c301_4fa0_b9a1_9aa3bac0305a.slice" Feb 14 04:30:39 crc kubenswrapper[4867]: E0214 04:30:39.002818 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2e27a3cb_c301_4fa0_b9a1_9aa3bac0305a.slice" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" 
podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.675690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerStarted","Data":"cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.680872 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.681648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"bd79661dacb3f2e02a3379c14a86446419599ff01a98ca85f72ac077fb6c5343"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.720094 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gzvxs" podStartSLOduration=2.858651086 podStartE2EDuration="38.720066978s" podCreationTimestamp="2026-02-14 04:30:01 +0000 UTC" firstStartedPulling="2026-02-14 04:30:02.78932871 +0000 UTC m=+1234.870266024" lastFinishedPulling="2026-02-14 04:30:38.650744602 +0000 UTC m=+1270.731681916" observedRunningTime="2026-02-14 04:30:39.695091756 +0000 UTC m=+1271.776029070" watchObservedRunningTime="2026-02-14 04:30:39.720066978 +0000 UTC m=+1271.801004292" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.738778 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.738759534 podStartE2EDuration="17.738759534s" podCreationTimestamp="2026-02-14 04:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:39.728912583 +0000 UTC m=+1271.809849897" 
watchObservedRunningTime="2026-02-14 04:30:39.738759534 +0000 UTC m=+1271.819696848" Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.514704 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.571352 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.578844 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-cp76f" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" containerID="cri-o://287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" gracePeriod=10 Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.133176 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233033 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233269 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: 
\"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233423 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.252360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6" (OuterVolumeSpecName: "kube-api-access-gndq6") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "kube-api-access-gndq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.293947 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.310136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.310306 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config" (OuterVolumeSpecName: "config") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.314212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.335649 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336433 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336565 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336643 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") on node 
\"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336736 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712125 4867 generic.go:334] "Generic (PLEG): container finished" podID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" exitCode=0 Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"41aaccd20d5bf4daeae755d0c155b427f29d56138b6d3562c58792965bd5ee9b"} Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712621 4867 scope.go:117] "RemoveContainer" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712828 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.762210 4867 scope.go:117] "RemoveContainer" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.775393 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.784712 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786167 4867 scope.go:117] "RemoveContainer" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: E0214 04:30:41.786639 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": container with ID starting with 287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73 not found: ID does not exist" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786684 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} err="failed to get container status \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": rpc error: code = NotFound desc = could not find container \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": container with ID starting with 287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73 not found: ID does not exist" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786711 4867 scope.go:117] "RemoveContainer" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 
04:30:41 crc kubenswrapper[4867]: E0214 04:30:41.787017 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": container with ID starting with 06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f not found: ID does not exist" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.787052 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f"} err="failed to get container status \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": rpc error: code = NotFound desc = could not find container \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": container with ID starting with 06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f not found: ID does not exist" Feb 14 04:30:43 crc kubenswrapper[4867]: I0214 04:30:43.009368 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" path="/var/lib/kubelet/pods/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7/volumes" Feb 14 04:30:43 crc kubenswrapper[4867]: I0214 04:30:43.370640 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:46 crc kubenswrapper[4867]: I0214 04:30:46.771102 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerID="cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288" exitCode=0 Feb 14 04:30:46 crc kubenswrapper[4867]: I0214 04:30:46.771207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" 
event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerDied","Data":"cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288"} Feb 14 04:30:47 crc kubenswrapper[4867]: I0214 04:30:47.920683 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.057742 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.257288 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.354672 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427660 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.428045 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.434956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.434994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh" (OuterVolumeSpecName: "kube-api-access-wlznh") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "kube-api-access-wlznh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.467447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.501420 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data" (OuterVolumeSpecName: "config-data") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534737 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534777 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534788 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534801 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794852 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerDied","Data":"a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e"} Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794902 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794946 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.259892 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260635 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260650 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260671 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="init" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260707 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="init" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260895 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260920 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.262111 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.279776 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.352965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.353690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.353942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354348 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.458424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.458563 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.479026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.583989 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.133980 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.655584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.657243 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.671676 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.790282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.790666 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.832628 4867 generic.go:334] "Generic (PLEG): container finished" podID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerID="16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6" exitCode=0 Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.832927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6"} Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.833038 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerStarted","Data":"ebbc4da8bb363e9a0155ec0e870c82eae82810ab31f3b604e5582d38957c9d4d"} Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.901158 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.901215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.902695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.935765 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.937113 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.938915 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-f62v7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.955456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.977931 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.996620 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.998087 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.002534 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.002734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.007489 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.074839 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104148 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.105158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.127306 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.152614 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.153964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159247 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159524 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.161864 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.172042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.206493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.206640 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.208066 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.231108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.246258 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.248099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.268041 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.283009 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.284674 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.286582 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.293955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308842 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: 
\"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.354710 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.356373 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.367658 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.372764 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.374494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.380777 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.395726 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.399544 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410790 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411119 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: 
\"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: 
\"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.415876 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.416078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.433473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.437117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.442129 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.473261 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 
04:30:51.474775 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.484341 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.484605 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.486571 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513254 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513373 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.514458 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.514969 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: 
I0214 04:30:51.534274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.549202 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.585098 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.604348 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615396 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615627 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615666 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.616685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.630355 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.641175 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.689535 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.699778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.717522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.717585 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.719156 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.748276 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.811054 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.872738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerStarted","Data":"42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3"} Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.873820 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.876904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerStarted","Data":"18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"} Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.901707 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podStartSLOduration=2.901686775 podStartE2EDuration="2.901686775s" podCreationTimestamp="2026-02-14 04:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:51.892429199 +0000 UTC m=+1283.973366523" watchObservedRunningTime="2026-02-14 04:30:51.901686775 +0000 UTC m=+1283.982624089" Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.042898 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.176613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.337407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 
04:30:52.338742 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd001336_81f9_43f6_9540_432047e6c98a.slice/crio-3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03 WatchSource:0}: Error finding container 3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03: Status 404 returned error can't find the container with id 3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03 Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.424661 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6961722f_b14d_42f2_bd56_68686c2e8a9a.slice/crio-9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73 WatchSource:0}: Error finding container 9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73: Status 404 returned error can't find the container with id 9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.429831 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.480353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.615249 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.630484 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.679443 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f3a1a1_5734_4782_98e1_1eb22cfbdf93.slice/crio-2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce WatchSource:0}: Error finding container 2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce: Status 404 returned error can't find the container with id 2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.679724 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06 WatchSource:0}: Error finding container accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06: Status 404 returned error can't find the container with id accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.727663 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.742496 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815 WatchSource:0}: Error finding container e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815: Status 404 returned error can't find the container with id e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.891540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerStarted","Data":"4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"} Feb 14 04:30:52 crc 
kubenswrapper[4867]: I0214 04:30:52.892861 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerStarted","Data":"64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895320 4867 generic.go:334] "Generic (PLEG): container finished" podID="9c993d62-94a7-4903-b984-adcef36b53b8" containerID="b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerDied","Data":"b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerStarted","Data":"27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.898315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerStarted","Data":"accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.907370 4867 generic.go:334] "Generic (PLEG): container finished" podID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerID="8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.907473 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" 
event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerDied","Data":"8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.911140 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerStarted","Data":"e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913697 4867 generic.go:334] "Generic (PLEG): container finished" podID="bd001336-81f9-43f6-9540-432047e6c98a" containerID="e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerDied","Data":"e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerStarted","Data":"3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.914955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerStarted","Data":"2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.922085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" 
event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerStarted","Data":"dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c"}
Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.922121 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerStarted","Data":"9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.005224 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3b6b-account-create-update-74g2s" podStartSLOduration=2.005199656 podStartE2EDuration="2.005199656s" podCreationTimestamp="2026-02-14 04:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:52.989410838 +0000 UTC m=+1285.070348152" watchObservedRunningTime="2026-02-14 04:30:53.005199656 +0000 UTC m=+1285.086136970"
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.371009 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.388158 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:53 crc kubenswrapper[4867]: E0214 04:30:53.493683 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-conmon-cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-conmon-4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.934789 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerID="bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.934863 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerDied","Data":"bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.937877 4867 generic.go:334] "Generic (PLEG): container finished" podID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerID="dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.937913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerDied","Data":"dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.941546 4867 generic.go:334] "Generic (PLEG): container finished" podID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerID="8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.941649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerDied","Data":"8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.943831 4867 generic.go:334] "Generic (PLEG): container finished" podID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerID="4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.943882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerDied","Data":"4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.945687 4867 generic.go:334] "Generic (PLEG): container finished" podID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerID="cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.945758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerDied","Data":"cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.954856 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.500953 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"bd001336-81f9-43f6-9540-432047e6c98a\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"bd001336-81f9-43f6-9540-432047e6c98a\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd001336-81f9-43f6-9540-432047e6c98a" (UID: "bd001336-81f9-43f6-9540-432047e6c98a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595956 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.602281 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc" (OuterVolumeSpecName: "kube-api-access-nx5sc") pod "bd001336-81f9-43f6-9540-432047e6c98a" (UID: "bd001336-81f9-43f6-9540-432047e6c98a"). InnerVolumeSpecName "kube-api-access-nx5sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.680784 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.686919 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.698892 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.801822 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"9c993d62-94a7-4903-b984-adcef36b53b8\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.802807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"f90d34b6-263e-4515-a13a-a41fda1c40ca\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.802914 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"9c993d62-94a7-4903-b984-adcef36b53b8\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.803146 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"f90d34b6-263e-4515-a13a-a41fda1c40ca\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.804436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f90d34b6-263e-4515-a13a-a41fda1c40ca" (UID: "f90d34b6-263e-4515-a13a-a41fda1c40ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.804979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c993d62-94a7-4903-b984-adcef36b53b8" (UID: "9c993d62-94a7-4903-b984-adcef36b53b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.809610 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs" (OuterVolumeSpecName: "kube-api-access-fhjfs") pod "9c993d62-94a7-4903-b984-adcef36b53b8" (UID: "9c993d62-94a7-4903-b984-adcef36b53b8"). InnerVolumeSpecName "kube-api-access-fhjfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.812485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np" (OuterVolumeSpecName: "kube-api-access-5n9np") pod "f90d34b6-263e-4515-a13a-a41fda1c40ca" (UID: "f90d34b6-263e-4515-a13a-a41fda1c40ca"). InnerVolumeSpecName "kube-api-access-5n9np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906444 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906485 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906497 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906522 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerDied","Data":"3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962124 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962077 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.967452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerDied","Data":"27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.967563 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.968452 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerDied","Data":"18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972282 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972376 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.007749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerDied","Data":"e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.008257 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.011721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerDied","Data":"accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.011749 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.013172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerDied","Data":"2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.013195 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.014191 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerDied","Data":"9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.014212 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.015472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerDied","Data":"4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.015498 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.094679 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.109985 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.120279 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.167697 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.178880 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"b1826e5b-3563-455f-9caf-9c4ee203210f\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"b1826e5b-3563-455f-9caf-9c4ee203210f\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189597 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"6961722f-b14d-42f2-bd56-68686c2e8a9a\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"6961722f-b14d-42f2-bd56-68686c2e8a9a\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.190836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c14b9ea2-b4ee-4365-8b77-d58ff122fabb" (UID: "c14b9ea2-b4ee-4365-8b77-d58ff122fabb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.190836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1826e5b-3563-455f-9caf-9c4ee203210f" (UID: "b1826e5b-3563-455f-9caf-9c4ee203210f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.191546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6961722f-b14d-42f2-bd56-68686c2e8a9a" (UID: "6961722f-b14d-42f2-bd56-68686c2e8a9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.193242 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8" (OuterVolumeSpecName: "kube-api-access-t5vl8") pod "b1826e5b-3563-455f-9caf-9c4ee203210f" (UID: "b1826e5b-3563-455f-9caf-9c4ee203210f"). InnerVolumeSpecName "kube-api-access-t5vl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.194965 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg" (OuterVolumeSpecName: "kube-api-access-7jlgg") pod "6961722f-b14d-42f2-bd56-68686c2e8a9a" (UID: "6961722f-b14d-42f2-bd56-68686c2e8a9a"). InnerVolumeSpecName "kube-api-access-7jlgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.202330 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v" (OuterVolumeSpecName: "kube-api-access-zrw7v") pod "c14b9ea2-b4ee-4365-8b77-d58ff122fabb" (UID: "c14b9ea2-b4ee-4365-8b77-d58ff122fabb"). InnerVolumeSpecName "kube-api-access-zrw7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292149 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292169 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292181 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292193 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292201 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292209 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292599 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" (UID: "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.293433 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c5e9025-3781-4461-98d7-0d0d72c3b59b" (UID: "2c5e9025-3781-4461-98d7-0d0d72c3b59b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.296731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb" (OuterVolumeSpecName: "kube-api-access-sz4mb") pod "2c5e9025-3781-4461-98d7-0d0d72c3b59b" (UID: "2c5e9025-3781-4461-98d7-0d0d72c3b59b"). InnerVolumeSpecName "kube-api-access-sz4mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.296898 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx" (OuterVolumeSpecName: "kube-api-access-7srfx") pod "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" (UID: "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93"). InnerVolumeSpecName "kube-api-access-7srfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.394524 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395132 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395146 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395155 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037757 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037757 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037786 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037796 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037811 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerStarted","Data":"abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336"}
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.039468 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.061294 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gk75z" podStartSLOduration=2.348928085 podStartE2EDuration="8.061247314s" podCreationTimestamp="2026-02-14 04:30:51 +0000 UTC" firstStartedPulling="2026-02-14 04:30:52.201741156 +0000 UTC m=+1284.282678470" lastFinishedPulling="2026-02-14 04:30:57.914060385 +0000 UTC m=+1289.994997699" observedRunningTime="2026-02-14 04:30:59.056942529 +0000 UTC m=+1291.137879853" watchObservedRunningTime="2026-02-14 04:30:59.061247314 +0000 UTC m=+1291.142184628"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.586712 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj"
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.688536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"]
Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.688792 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" containerID="cri-o://cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" gracePeriod=10
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.061937 4867 generic.go:334] "Generic (PLEG): container finished" podID="e2d457dc-19b4-4279-8c97-930f91291f98" containerID="cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" exitCode=0
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.062119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c"}
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.326836 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n"
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.338858 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339446 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.348798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2" (OuterVolumeSpecName: "kube-api-access-xh2g2") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "kube-api-access-xh2g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.410079 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.423684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.426581 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.431815 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config" (OuterVolumeSpecName: "config") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.441975 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.446340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") "
Feb 14 04:31:00 crc kubenswrapper[4867]: W0214 04:31:00.446693 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e2d457dc-19b4-4279-8c97-930f91291f98/volumes/kubernetes.io~configmap/dns-swift-storage-0
Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.446886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449083 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449125 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449139 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449156 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449173 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449186 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.073534 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85"} Feb 14 04:31:01 crc 
kubenswrapper[4867]: I0214 04:31:01.073604 4867 scope.go:117] "RemoveContainer" containerID="cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.073685 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.100984 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.101308 4867 scope.go:117] "RemoveContainer" containerID="3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.112729 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:31:02 crc kubenswrapper[4867]: I0214 04:31:02.088080 4867 generic.go:334] "Generic (PLEG): container finished" podID="49af28f1-d33f-4717-81a7-4377bfef388c" containerID="abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336" exitCode=0 Feb 14 04:31:02 crc kubenswrapper[4867]: I0214 04:31:02.088132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerDied","Data":"abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336"} Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.014600 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" path="/var/lib/kubelet/pods/e2d457dc-19b4-4279-8c97-930f91291f98/volumes" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.522425 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.613688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.614737 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.614964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.623612 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9" (OuterVolumeSpecName: "kube-api-access-j9gd9") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "kube-api-access-j9gd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.655886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.675563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data" (OuterVolumeSpecName: "config-data") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718549 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718596 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718616 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerDied","Data":"64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b"} Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107915 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107981 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.381084 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382389 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382424 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382431 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382452 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382467 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382472 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382485 4867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382492 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382520 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382526 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382545 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382551 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="init" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382568 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="init" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382577 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382583 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382591 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382616 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382627 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382635 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382834 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382847 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382855 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382867 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382880 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382894 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" 
containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382903 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382912 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382927 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382938 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.384664 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.404173 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.427688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.429159 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.432076 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.432117 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434441 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.437562 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452694 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452759 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452919 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555229 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555430 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555459 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp48t\" (UniqueName: 
\"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555718 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555734 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.556433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.557082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.557793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.558279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" 
(UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.558476 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.563409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.569601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.570637 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.572255 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.585920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.586039 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pzjfh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.587380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.598823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.599247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.600164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.604031 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: 
\"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.615385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.658646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.658738 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.667651 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.702746 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.753944 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.755166 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.768882 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.779304 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.779800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.780587 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-cinder-dockercfg-76c2m" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.783698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.818199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.819622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.844019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: 
\"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.924716 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.984213 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:31:04 crc 
kubenswrapper[4867]: I0214 04:31:04.986042 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.989137 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.989154 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-p86vr" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.042094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.046988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047123 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.057552 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.085186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.089473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " 
pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.093649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.096261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.098010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.117545 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.118878 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.118979 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122472 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122903 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jbsbl" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122925 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.126076 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.167900 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.168579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.169466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.169587 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod 
\"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.173178 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.188774 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.212984 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.215144 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.219767 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.220149 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jvmrs" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.220461 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.263168 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.283866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.283989 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.284070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.284830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.285009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287500 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.290901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.291017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.303900 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.306322 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.318427 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.320390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.328851 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.329155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.385488 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: 
\"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.413022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.413084 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.424190 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.424386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425297 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425462 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.426948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.429140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.430380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.434682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.453578 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.483185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.503271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcfw\" (UniqueName: 
\"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.532976 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538857 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: 
\"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539073 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539119 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " 
pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539195 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.541618 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.546056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.547190 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.557451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.586472 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646455 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.649325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.649830 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.654736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.658072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.658540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.668311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.671421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.736138 4867 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.753540 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.794068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.805983 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.810912 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.814044 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.817998 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.818227 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.823134 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.825366 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.825701 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.850963 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:05 
crc kubenswrapper[4867]: I0214 04:31:05.863785 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: W0214 04:31:05.865972 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18fb2b12_f922_4976_8e05_6e78a8751456.slice/crio-9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff WatchSource:0}: Error finding container 9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff: Status 404 returned error can't find the container with id 9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.866625 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.866851 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.869025 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.869253 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.879321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.945007 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970581 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970611 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") 
pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970799 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970921 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970943 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073418 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073434 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073598 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073711 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") 
pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.079715 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.101305 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.101362 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.103864 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.104020 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.112948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.115742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.116652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.122766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.124914 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.125408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.126024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.156797 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grkqh"]
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.210059 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerStarted","Data":"9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff"}
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.240190 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerStarted","Data":"f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93"}
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.240250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerStarted","Data":"2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df"}
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272471 4867 generic.go:334] "Generic (PLEG): container finished" podID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerID="457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc" exitCode=0
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerDied","Data":"457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc"}
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerStarted","Data":"431b7a707179dbdb628432b420ce048e47de472cc8d7794e6aaafcbf07fdc73a"}
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.309743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.310184 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.334435 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mvxwt" podStartSLOduration=2.334416463 podStartE2EDuration="2.334416463s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:06.286178645 +0000 UTC m=+1298.367115949" watchObservedRunningTime="2026-02-14 04:31:06.334416463 +0000 UTC m=+1298.415353777"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.489542 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mklx7"]
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.506690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.529137 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.763578 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.809627 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zrmj"]
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.875730 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.897426 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-425tq"]
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.214239 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.263966 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.310091 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr"
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.340601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerStarted","Data":"f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1"}
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.346523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"fec759d47361c43e0a7e0280d89486799080a9e793713da877ee4655c98870f4"}
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.349337 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerStarted","Data":"b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8"}
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.358745 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerStarted","Data":"b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90"}
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.366978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerStarted","Data":"d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef"}
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.412115 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"]
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452524 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452553 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") "
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.462655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t" (OuterVolumeSpecName: "kube-api-access-jp48t") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "kube-api-access-jp48t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.494557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.528299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.529169 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config" (OuterVolumeSpecName: "config") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.538945 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.541870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562623 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562664 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562674 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562686 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562699 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562711 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.747167 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.918105 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.405827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.408982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerDied","Data":"431b7a707179dbdb628432b420ce048e47de472cc8d7794e6aaafcbf07fdc73a"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.409040 4867 scope.go:117] "RemoveContainer" containerID="457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc"
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.409267 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr"
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.413742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerStarted","Data":"b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431469 4867 generic.go:334] "Generic (PLEG): container finished" podID="5cef8824-386a-4c20-a176-e1964d5307f7" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" exitCode=0
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerStarted","Data":"a8af3c3243557785237b106c328a49ec8c7419d5a57f62a13b9820888d0db44a"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.441551 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-425tq" podStartSLOduration=4.4415376890000005 podStartE2EDuration="4.441537689s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:08.439329151 +0000 UTC m=+1300.520266465" watchObservedRunningTime="2026-02-14 04:31:08.441537689 +0000 UTC m=+1300.522475003"
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.442690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde"}
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.541225 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"]
Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.578460 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"]
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.054454 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" path="/var/lib/kubelet/pods/2b19d645-1c0b-4b85-a052-d90851f5f063/volumes"
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.511298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerStarted","Data":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"}
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.512967 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm"
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.521892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3"}
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.530541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562"}
Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.540845 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" podStartSLOduration=5.540828449 podStartE2EDuration="5.540828449s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:09.531244425 +0000 UTC m=+1301.612181759" watchObservedRunningTime="2026-02-14 04:31:09.540828449 +0000 UTC m=+1301.621765773"
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.597458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa"}
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.598133 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" containerID="cri-o://6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" gracePeriod=30
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.598757 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" containerID="cri-o://15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" gracePeriod=30
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.609984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d"}
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.610259 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" containerID="cri-o://509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" gracePeriod=30
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.610326 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" containerID="cri-o://69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" gracePeriod=30
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.643118 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.643051916 podStartE2EDuration="6.643051916s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:10.626985211 +0000 UTC m=+1302.707922525" watchObservedRunningTime="2026-02-14 04:31:10.643051916 +0000 UTC m=+1302.723989230"
Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.672050 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.671754677 podStartE2EDuration="6.671754677s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:10.658377882 +0000 UTC m=+1302.739315206" watchObservedRunningTime="2026-02-14 04:31:10.671754677 +0000 UTC m=+1302.752691981"
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.659877 4867 generic.go:334] "Generic (PLEG): container finished" podID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerID="15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" exitCode=0
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660188 4867 generic.go:334] "Generic (PLEG): container finished" podID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerID="6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" exitCode=143
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa"}
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660296 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3"}
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672795 4867 generic.go:334] "Generic (PLEG): container finished" podID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerID="69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" exitCode=0
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672826 4867 generic.go:334] "Generic (PLEG): container finished" podID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerID="509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" exitCode=143
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d"}
Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562"}
Feb 14 04:31:12 crc kubenswrapper[4867]: I0214 04:31:12.710644 4867 generic.go:334] "Generic (PLEG): container finished" podID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerID="f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93" exitCode=0
Feb 14 04:31:12 crc kubenswrapper[4867]: I0214 04:31:12.710687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerDied","Data":"f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93"}
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.750471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde"}
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.751054 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.752678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8"}
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.752715 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.754164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerDied","Data":"2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df"}
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.754215 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.808488 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.811430 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.819388 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.925781 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939164 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939427 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939517 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939630 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939911 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.940037 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.940086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") "
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.947404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs" (OuterVolumeSpecName: "logs") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.953015 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.988167 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"]
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.989142 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" containerID="cri-o://42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" gracePeriod=10
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.991568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts" (OuterVolumeSpecName: "scripts") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.993137 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g" (OuterVolumeSpecName: "kube-api-access-72f4g") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b").
InnerVolumeSpecName "kube-api-access-72f4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.993458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts" (OuterVolumeSpecName: "scripts") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.996899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.001691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz" (OuterVolumeSpecName: "kube-api-access-xg9qz") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "kube-api-access-xg9qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.002396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.031347 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.041922 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data" (OuterVolumeSpecName: "config-data") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042811 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042837 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: 
I0214 04:31:16.042870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042892 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042914 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod 
\"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043638 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043658 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043671 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043682 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043692 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043703 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043716 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043723 4867 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043732 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs" (OuterVolumeSpecName: "logs") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: W0214 04:31:16.043952 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c94481eb-b5a1-40d6-86ea-623f39b63b92/volumes/kubernetes.io~secret/config-data Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data" (OuterVolumeSpecName: "config-data") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.053482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (OuterVolumeSpecName: "glance") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.056745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.063422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc" (OuterVolumeSpecName: "kube-api-access-vs4zc") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "kube-api-access-vs4zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.067380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts" (OuterVolumeSpecName: "scripts") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.067715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (OuterVolumeSpecName: "glance") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.089975 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data" (OuterVolumeSpecName: "config-data") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.093469 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.101033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.114851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.136729 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data" (OuterVolumeSpecName: "config-data") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147942 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147969 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147978 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148007 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148032 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148040 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148048 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148059 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148188 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148205 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148237 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148249 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.155797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.181681 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.181859 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25") on node "crc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.210118 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.210303 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d") on node "crc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251438 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251483 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251514 4867 reconciler_common.go:293] "Volume detached for 
volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.774992 4867 generic.go:334] "Generic (PLEG): container finished" podID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerID="42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" exitCode=0 Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3"} Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775134 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775143 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775239 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.833861 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.844686 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.865283 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.881756 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.897459 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898024 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898040 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898055 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898061 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898074 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898080 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898101 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898108 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898131 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898138 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898157 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898350 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898374 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898385 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898399 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898412 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.899584 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.907792 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908400 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908574 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.911490 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.931739 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.944179 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.950136 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.951300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.957914 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.968990 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969112 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969133 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969204 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.022601 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" path="/var/lib/kubelet/pods/07039199-dee5-4a0b-ae25-6eebf0cdc70b/volumes" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.024328 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" path="/var/lib/kubelet/pods/f999df8e-7024-489e-ab2a-6b849be2f6ef/volumes" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.073607 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074681 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074784 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075290 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.076409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.080774 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.080815 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.081787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.092569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.092863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.102831 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.103140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.123722 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.140552 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.141288 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.142118 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.149157 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.149485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.164438 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178755 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.185558 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186879 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186905 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.190856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.196702 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.196888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: 
\"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.232117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.252415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.280452 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.280997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281142 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " 
pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.284797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.285163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.285176 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.286565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.288047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.300077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.572944 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:19 crc kubenswrapper[4867]: I0214 04:31:19.013219 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" path="/var/lib/kubelet/pods/c94481eb-b5a1-40d6-86ea-623f39b63b92/volumes" Feb 14 04:31:19 crc kubenswrapper[4867]: I0214 04:31:19.585464 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.959641 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.960668 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cmn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-9zrmj_openstack(ffefbab2-8288-4eaa-9df3-e95383cdf19d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.961867 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-9zrmj" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" Feb 14 04:31:24 crc kubenswrapper[4867]: I0214 04:31:24.585941 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 04:31:24 crc kubenswrapper[4867]: E0214 04:31:24.863578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-9zrmj" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" Feb 14 04:31:31 crc kubenswrapper[4867]: I0214 04:31:31.251186 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:31:31 crc kubenswrapper[4867]: I0214 04:31:31.251930 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.935827 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.936303 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x77fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Con
tainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-mklx7_openstack(cccb73cc-2b89-4363-b7ca-44dfa627d9f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.937597 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-mklx7" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" Feb 14 04:31:33 crc kubenswrapper[4867]: I0214 04:31:33.953649 4867 generic.go:334] "Generic (PLEG): container finished" podID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerID="b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468" exitCode=0 Feb 14 04:31:33 crc kubenswrapper[4867]: I0214 04:31:33.953815 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerDied","Data":"b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468"} Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.956139 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-mklx7" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.082640 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180102 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180268 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180300 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180375 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvg6h\" 
(UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.188189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h" (OuterVolumeSpecName: "kube-api-access-bvg6h") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "kube-api-access-bvg6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.234352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config" (OuterVolumeSpecName: "config") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.239216 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.246523 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.259946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.264684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283481 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283542 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283557 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283568 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 
04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283579 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283588 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.586429 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.586553 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967732 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"ebbc4da8bb363e9a0155ec0e870c82eae82810ab31f3b604e5582d38957c9d4d"} Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967856 4867 scope.go:117] "RemoveContainer" containerID="42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.019300 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.024258 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.229687 4867 scope.go:117] "RemoveContainer" containerID="16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6" Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.279825 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.280000 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87zjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-grkqh_openstack(9c973bde-ff14-4cce-9f9c-57354dbd4adb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.281146 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-grkqh" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.561931 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641244 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641359 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " Feb 14 
04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.648913 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw" (OuterVolumeSpecName: "kube-api-access-fzcfw") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "kube-api-access-fzcfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.649870 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.679200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config" (OuterVolumeSpecName: "config") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.684798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.711316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.717795 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.751422 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.751455 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.902910 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerDied","Data":"d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef"} Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990836 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990858 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.992910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"a3cc1da73263e85bbf2b7d750ab646192fbf22c988007a55f775707de3030a59"} Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.995052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerStarted","Data":"42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201"} Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.995088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerStarted","Data":"884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9"} Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.999389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3"} Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.001377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerStarted","Data":"60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a"} Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.002965 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-grkqh" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.021760 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gdzwh" podStartSLOduration=19.021740138 podStartE2EDuration="19.021740138s" podCreationTimestamp="2026-02-14 04:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:36.016998262 +0000 UTC m=+1328.097935576" watchObservedRunningTime="2026-02-14 04:31:36.021740138 +0000 UTC m=+1328.102677452" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.061373 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-246z7" podStartSLOduration=2.804693954 podStartE2EDuration="32.061348167s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:05.88675425 +0000 UTC m=+1297.967691564" lastFinishedPulling="2026-02-14 04:31:35.143408463 +0000 UTC m=+1327.224345777" observedRunningTime="2026-02-14 04:31:36.035070061 +0000 UTC m=+1328.116007375" watchObservedRunningTime="2026-02-14 04:31:36.061348167 +0000 UTC m=+1328.142285471" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.244203 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.245044 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245065 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync" Feb 14 04:31:36 crc kubenswrapper[4867]: 
E0214 04:31:36.245109 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245116 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.245128 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="init" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245134 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="init" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245328 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.252042 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.267325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268719 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268901 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268922 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268945 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.304713 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.307645 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316689 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jbsbl" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316831 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.341159 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.341697 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372156 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374161 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.379264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.379690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.380141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.381323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.409266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484206 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484313 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.490340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.491696 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.494045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.518957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: 
\"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.519868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.648016 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.658289 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.761140 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:36 crc kubenswrapper[4867]: W0214 04:31:36.780837 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f21b5d2_75e5_4cc5_96d0_670e9ed88df0.slice/crio-a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b WatchSource:0}: Error finding container a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b: Status 404 returned error can't find the container with id a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.052318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" path="/var/lib/kubelet/pods/34e3aca5-c7d4-4401-b301-1ab6497cb1d7/volumes" Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.075841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b"} Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.098965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1"} Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.286724 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.623889 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.120098 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerStarted","Data":"fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb"} Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.131416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"} Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.148391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034"} Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.229871 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.229852759 podStartE2EDuration="22.229852759s" 
podCreationTimestamp="2026-02-14 04:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:38.216063074 +0000 UTC m=+1330.297000388" watchObservedRunningTime="2026-02-14 04:31:38.229852759 +0000 UTC m=+1330.310790073" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.155635 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.161944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.163279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.166477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.166785 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171678 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"39d679b02b54e70585a87ea7dbf473acb26533d3e4ea7319177999bccaf06766"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171729 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.174612 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.181708 4867 generic.go:334] "Generic (PLEG): container finished" podID="41682938-f603-460d-91e2-9de423799697" containerID="89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae" exitCode=0 Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.181786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.186670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"} Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.259722 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c5fcd7cb-sr8z9" podStartSLOduration=3.259692149 podStartE2EDuration="3.259692149s" podCreationTimestamp="2026-02-14 04:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:39.227863706 +0000 UTC m=+1331.308801020" watchObservedRunningTime="2026-02-14 04:31:39.259692149 
+0000 UTC m=+1331.340629453" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.322322 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.322300608 podStartE2EDuration="23.322300608s" podCreationTimestamp="2026-02-14 04:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:39.277960603 +0000 UTC m=+1331.358897917" watchObservedRunningTime="2026-02-14 04:31:39.322300608 +0000 UTC m=+1331.403237922" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350521 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350570 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350717 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.452988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453104 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453309 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod 
\"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.460879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.461284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.462636 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.466307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.467217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc 
kubenswrapper[4867]: I0214 04:31:39.471494 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.475262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.640097 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.219259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerStarted","Data":"cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da"} Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.254870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerStarted","Data":"3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0"} Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.254929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.293715 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9zrmj" podStartSLOduration=3.6896220250000002 podStartE2EDuration="36.293689719s" 
podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.991604808 +0000 UTC m=+1299.072542112" lastFinishedPulling="2026-02-14 04:31:39.595672502 +0000 UTC m=+1331.676609806" observedRunningTime="2026-02-14 04:31:40.259216105 +0000 UTC m=+1332.340153419" watchObservedRunningTime="2026-02-14 04:31:40.293689719 +0000 UTC m=+1332.374627033" Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.321173 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" podStartSLOduration=4.321147937 podStartE2EDuration="4.321147937s" podCreationTimestamp="2026-02-14 04:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:40.293569536 +0000 UTC m=+1332.374506850" watchObservedRunningTime="2026-02-14 04:31:40.321147937 +0000 UTC m=+1332.402085251" Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.349729 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.271436 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9"} Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272151 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34"} Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272224 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"028f5efc08b53a55521858d44a43207730eee63dfa58503296592bae2f4868dd"} Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.285647 4867 generic.go:334] "Generic (PLEG): container finished" podID="87589008-b930-4698-b94b-883c707d5fb1" containerID="42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201" exitCode=0 Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.285797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerDied","Data":"42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201"} Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.302601 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-569c46898f-bbd5l" podStartSLOduration=2.302577713 podStartE2EDuration="2.302577713s" podCreationTimestamp="2026-02-14 04:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:41.290168215 +0000 UTC m=+1333.371105549" watchObservedRunningTime="2026-02-14 04:31:41.302577713 +0000 UTC m=+1333.383515037" Feb 14 04:31:45 crc kubenswrapper[4867]: I0214 04:31:45.343090 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerID="cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da" exitCode=0 Feb 14 04:31:45 crc kubenswrapper[4867]: I0214 04:31:45.343197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerDied","Data":"cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da"} Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.359050 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="18fb2b12-f922-4976-8e05-6e78a8751456" containerID="60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a" exitCode=0 Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.359104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerDied","Data":"60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a"} Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.649714 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.728951 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.729263 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns" containerID="cri-o://5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" gracePeriod=10 Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.088630 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.109836 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.208971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209041 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209389 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.215044 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs" (OuterVolumeSpecName: "logs") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.223649 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.223734 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts" (OuterVolumeSpecName: "scripts") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.224988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k" (OuterVolumeSpecName: "kube-api-access-85v7k") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "kube-api-access-85v7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.225057 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6" (OuterVolumeSpecName: "kube-api-access-2cmn6") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "kube-api-access-2cmn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.233203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.242687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts" (OuterVolumeSpecName: "scripts") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.254799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257201 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.273167 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281308 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281322 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.290692 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.292969 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data" (OuterVolumeSpecName: "config-data") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.302276 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.305598 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data" (OuterVolumeSpecName: "config-data") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314226 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314267 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314279 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314287 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314296 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 
14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314304 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314314 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314323 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314330 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314338 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314346 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.354324 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.359626 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.367542 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.372372 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401517 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerDied","Data":"884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401565 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401711 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.409310 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerStarted","Data":"f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.417608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.421944 4867 generic.go:334] "Generic (PLEG): container finished" podID="5cef8824-386a-4c20-a176-e1964d5307f7" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" exitCode=0 Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422020 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422040 4867 scope.go:117] "RemoveContainer" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"a8af3c3243557785237b106c328a49ec8c7419d5a57f62a13b9820888d0db44a"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.426244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerDied","Data":"b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8"} Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.426303 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.429928 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.469010 4867 scope.go:117] "RemoveContainer" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.469949 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mklx7" podStartSLOduration=3.174195966 podStartE2EDuration="43.46992839s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.598973523 +0000 UTC m=+1298.679910837" lastFinishedPulling="2026-02-14 04:31:46.894705947 +0000 UTC m=+1338.975643261" observedRunningTime="2026-02-14 04:31:47.457627534 +0000 UTC m=+1339.538564848" watchObservedRunningTime="2026-02-14 04:31:47.46992839 +0000 UTC m=+1339.550865704" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519532 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.520854 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.520959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.521034 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.529717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs" (OuterVolumeSpecName: "kube-api-access-wwnbs") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "kube-api-access-wwnbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.541006 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.541457 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.541477 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.541495 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547584 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.547643 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="init" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547653 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="init" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.547718 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547725 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548078 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync" Feb 14 
04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548102 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548113 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548877 4867 scope.go:117] "RemoveContainer" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.549625 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.559008 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": container with ID starting with 5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44 not found: ID does not exist" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559060 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"} err="failed to get container status \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": rpc error: code = NotFound desc = could not find container \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": container with ID starting with 5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44 not found: ID does not exist" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559090 4867 scope.go:117] "RemoveContainer" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" Feb 
14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jvmrs" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559922 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560625 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560771 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560636 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.561228 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": container with ID starting with 89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95 not found: ID does not exist" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.561325 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95"} err="failed to get container status \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": rpc error: code = NotFound desc = could not find container \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": container with ID starting with 89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95 not found: ID does not exist" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.581172 4867 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.599777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config" (OuterVolumeSpecName: "config") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.619674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623276 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623573 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624030 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 
04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624374 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: W0214 04:31:47.624487 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5cef8824-386a-4c20-a176-e1964d5307f7/volumes/kubernetes.io~configmap/dns-svc Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624521 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624624 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624643 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624652 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624662 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.636275 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.646361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731777 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc 
kubenswrapper[4867]: I0214 04:31:47.731801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731914 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731926 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.732268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.737119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 
04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.743347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.748130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.749616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.750421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.758530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.878666 4867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.069893 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.093911 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.111066 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156789 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.183784 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn" (OuterVolumeSpecName: "kube-api-access-r8hsn") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: 
"18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "kube-api-access-r8hsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.228647 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: "18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256050 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:48 crc kubenswrapper[4867]: E0214 04:31:48.256732 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256760 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256998 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.257808 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.260720 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.260898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261057 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261342 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261574 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.263324 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.263339 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.280314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.339132 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data" 
(OuterVolumeSpecName: "config-data") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: "18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365661 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481048 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: 
\"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481177 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.486743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 
14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.496210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.502013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.502188 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.504330 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.507960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.537150 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.538632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.546321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.555279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564306 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerDied","Data":"9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff"} Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564359 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564455 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.603637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.062442 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" path="/var/lib/kubelet/pods/5cef8824-386a-4c20-a176-e1964d5307f7/volumes" Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.279195 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.574931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7595b47f77-vtg9d" event={"ID":"1ddcc862-a10c-487c-aaa4-0e93df9c0005","Type":"ContainerStarted","Data":"284d4ef18c8f33c8c1b929f6ec01157fe34daeca23e98acbb24226cffb045a3a"} Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.577941 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218"} Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.577972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"d72d747bf641f17caffe57b13805170a59917becd98a04f814a50119c9f846ba"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.366414 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"] Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.384387 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"] Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.384546 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.477911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478012 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478113 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580029 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580078 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580330 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.587800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.587990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.591890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.601486 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: 
\"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.601782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604621 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.606348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.610311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerStarted","Data":"933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.619548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7595b47f77-vtg9d" 
event={"ID":"1ddcc862-a10c-487c-aaa4-0e93df9c0005","Type":"ContainerStarted","Data":"31fb0f3c48111438ee031349650a61f4fe5bd218eb1d44f8b161df96998d98a0"}
Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.620678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7595b47f77-vtg9d"
Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.631560 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74d7c6cb48-8wr7l" podStartSLOduration=3.631538048 podStartE2EDuration="3.631538048s" podCreationTimestamp="2026-02-14 04:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:50.628256331 +0000 UTC m=+1342.709193645" watchObservedRunningTime="2026-02-14 04:31:50.631538048 +0000 UTC m=+1342.712475362"
Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.657016 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-grkqh" podStartSLOduration=4.335962361 podStartE2EDuration="46.656997433s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.218338847 +0000 UTC m=+1298.299276161" lastFinishedPulling="2026-02-14 04:31:48.539373929 +0000 UTC m=+1340.620311233" observedRunningTime="2026-02-14 04:31:50.654046184 +0000 UTC m=+1342.734983498" watchObservedRunningTime="2026-02-14 04:31:50.656997433 +0000 UTC m=+1342.737934747"
Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.680973 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7595b47f77-vtg9d" podStartSLOduration=2.680953287 podStartE2EDuration="2.680953287s" podCreationTimestamp="2026-02-14 04:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:50.678327188 +0000 UTC m=+1342.759264502" watchObservedRunningTime="2026-02-14 04:31:50.680953287 +0000 UTC m=+1342.761890601"
Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.723045 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8574cd8bdd-r5cv6"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.164297 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.168409 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.285938 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"]
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.357605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.357762 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.367175 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.505762 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.650853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"5837652f3241f8ac7f996793c0e77dc2cc0983f1e1cb2f4705eb5aed2bfafc25"}
Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.670979 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"17f1db18a03838b7c0b891920a932ab3620b823b7bc296bd601248587f10cc95"}
Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.671334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"5aad4aff4d66f881a3ea4da12e0740ea7f5d50327c7eaf6d2b1af7ad98769a29"}
Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.671353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8574cd8bdd-r5cv6"
Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.704589 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8574cd8bdd-r5cv6" podStartSLOduration=2.70456468 podStartE2EDuration="2.70456468s" podCreationTimestamp="2026-02-14 04:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:52.692400728 +0000 UTC m=+1344.773338062" watchObservedRunningTime="2026-02-14 04:31:52.70456468 +0000 UTC m=+1344.785502004"
Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687257 4867 generic.go:334] "Generic (PLEG): container finished" podID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerID="f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635" exitCode=0
Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687400 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerDied","Data":"f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635"}
Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8574cd8bdd-r5cv6"
Feb 14 04:31:55 crc kubenswrapper[4867]: I0214 04:31:55.712187 4867 generic.go:334] "Generic (PLEG): container finished" podID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerID="933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38" exitCode=0
Feb 14 04:31:55 crc kubenswrapper[4867]: I0214 04:31:55.712275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerDied","Data":"933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38"}
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.578104 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7"
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") "
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") "
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") "
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.698618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.707146 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq" (OuterVolumeSpecName: "kube-api-access-x77fq") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "kube-api-access-x77fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.742459 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerDied","Data":"f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1"}
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749635 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1"
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749704 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7"
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793371 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793770 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793782 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.084609 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.209868 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210097 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210196 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") "
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210650 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.211957 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214077 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm" (OuterVolumeSpecName: "kube-api-access-87zjm") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "kube-api-access-87zjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts" (OuterVolumeSpecName: "scripts") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.238760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.275071 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data" (OuterVolumeSpecName: "config-data") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.313909 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314167 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314231 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314289 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314410 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764228 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" containerID="cri-o://12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" gracePeriod=30
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.763975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f"}
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764388 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" containerID="cri-o://5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" gracePeriod=30
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764349 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" containerID="cri-o://80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" gracePeriod=30
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764349 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" containerID="cri-o://c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" gracePeriod=30
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerDied","Data":"b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90"}
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769244 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769354 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.836121 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.9986397030000003 podStartE2EDuration="54.836096589s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:07.256201489 +0000 UTC m=+1299.337138803" lastFinishedPulling="2026-02-14 04:31:58.093658375 +0000 UTC m=+1350.174595689" observedRunningTime="2026-02-14 04:31:58.810405478 +0000 UTC m=+1350.891342792" watchObservedRunningTime="2026-02-14 04:31:58.836096589 +0000 UTC m=+1350.917033903"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865070 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"]
Feb 14 04:31:58 crc kubenswrapper[4867]: E0214 04:31:58.865544 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865556 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: E0214 04:31:58.865598 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865605 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865790 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865806 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.866976 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.869472 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-p86vr"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.869877 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.879090 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.907159 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"]
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.929752 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.929902 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930097 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.968372 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"]
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.970736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.976789 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031895 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.032035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.034446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.041718 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.044125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"]
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.044162 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"]
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.045873 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.047674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.063229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.066127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.069030 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"]
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.130981 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78546bb898-l5722"]
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.135098 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136258 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.137301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.140852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.141868 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.142346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.151023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.193178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722"
Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") "
pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240895 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240926 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod 
\"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.254871 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.260000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.263772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.265478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.266533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.273884 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.274895 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.342841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.346699 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.348805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.349929 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.352630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") 
pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.398169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.404417 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.424451 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430182 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430384 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-76c2m" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.435519 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.474161 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.475102 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " 
pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.554491 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.556746 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.599994 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.643404 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657404 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657517 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657569 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657595 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.666996 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.667080 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.669601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.682706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.685895 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.688234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.689292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.692538 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.707178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.712453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764098 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.828015 4867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857229 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" exitCode=0 Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857259 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" exitCode=2 Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857281 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f"} Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47"} Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866291 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866344 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 
04:31:59.866390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866483 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866680 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866801 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: 
\"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.867836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.868385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.869194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.870384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.871127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: 
\"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.897121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974076 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.975204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.975966 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.977813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 
04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.979464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.987422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.991423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.003957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.117079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.131848 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.159275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.188233 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.319279 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.413394 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:00 crc kubenswrapper[4867]: W0214 04:32:00.419166 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf24394_6465_476f_a99e_f46fce318656.slice/crio-8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f WatchSource:0}: Error finding container 8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f: Status 404 returned error can't find the container with id 8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.525033 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.793749 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.911265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.912910 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"3e15ae2331b94d3c6d65cab2376b0b1e088c96cfaa63266969feb367a3f3d213"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.913792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"bab0243e2f30ade2fbf7d69ff5be791722a012a5265d850b903dfa45eb14c8cb"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.933073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerStarted","Data":"09deda04b6ec52201b019321aa75e2ff7261072711b3d07a6ec6b5a3d2007260"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.939149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"5019e7ff2e1fa2506cd7b4669b444efae61a9516b4d15aaa76c5f39c261cc2e8"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.944439 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" exitCode=0 Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.944478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3"} Feb 14 04:32:01 crc kubenswrapper[4867]: W0214 04:32:01.073959 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod746b9097_84d0_4d00_a92c_808df9206d8a.slice/crio-9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b WatchSource:0}: Error finding container 9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b: Status 404 returned error can't find the container with id 9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.109301 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.265423 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.265486 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.411270 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.982538 4867 generic.go:334] "Generic (PLEG): container finished" podID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerID="8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b" exitCode=0 Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.982711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" 
event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerDied","Data":"8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.992998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.993076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.993204 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.996379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.996424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"b7df10ba039fcee4e4b9fcdc56451b0c829f6865f31aa92d7e75f9d0c4ffbbef"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.999372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerStarted","Data":"9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b"} Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.053176 4867 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-api-78546bb898-l5722" podStartSLOduration=3.053150257 podStartE2EDuration="3.053150257s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:02.036967068 +0000 UTC m=+1354.117904382" watchObservedRunningTime="2026-02-14 04:32:02.053150257 +0000 UTC m=+1354.134087571" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.626668 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751887 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" 
(UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.752002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.752066 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.766961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc" (OuterVolumeSpecName: "kube-api-access-7sftc") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "kube-api-access-7sftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.793235 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.795170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.799115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.799630 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.814488 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config" (OuterVolumeSpecName: "config") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888066 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888359 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888427 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888499 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888587 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888661 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.021489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"e655fb0709e8bb6c8faaedd3620b602dde2f71f117bc0a4ce2f2db694fa65dcc"} Feb 14 04:32:03 crc 
kubenswrapper[4867]: I0214 04:32:03.029609 4867 generic.go:334] "Generic (PLEG): container finished" podID="746b9097-84d0-4d00-a92c-808df9206d8a" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" exitCode=0 Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.029739 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.034569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"f4ad14ad915c712ba0f4f33465067e05c07cc0afb594e4331d69be3ed95dd3cd"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.053079 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.054258 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerDied","Data":"09deda04b6ec52201b019321aa75e2ff7261072711b3d07a6ec6b5a3d2007260"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.054436 4867 scope.go:117] "RemoveContainer" containerID="8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.055194 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.192440 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.230591 4867 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.099915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"47d54b8f3d1cbe2df657b8c4ef5ec2454d923a5ded983165ad2ca683545e743a"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.127862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"a1e0acea8b8254a02fd035c490fd90428229ffa5a7fbe5002fd7d9df1e79a22d"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.164766 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" podStartSLOduration=3.7944623 podStartE2EDuration="6.16474163s" podCreationTimestamp="2026-02-14 04:31:58 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.219663981 +0000 UTC m=+1352.300601295" lastFinishedPulling="2026-02-14 04:32:02.589943311 +0000 UTC m=+1354.670880625" observedRunningTime="2026-02-14 04:32:04.157787316 +0000 UTC m=+1356.238724630" watchObservedRunningTime="2026-02-14 04:32:04.16474163 +0000 UTC m=+1356.245678944" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.173806 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" exitCode=0 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.173922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.174800 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" podStartSLOduration=3.798682872 podStartE2EDuration="6.174774176s" podCreationTimestamp="2026-02-14 04:31:58 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.219962319 +0000 UTC m=+1352.300899633" lastFinishedPulling="2026-02-14 04:32:02.596053623 +0000 UTC m=+1354.676990937" observedRunningTime="2026-02-14 04:32:04.136861332 +0000 UTC m=+1356.217798646" watchObservedRunningTime="2026-02-14 04:32:04.174774176 +0000 UTC m=+1356.255711490" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.196680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.196852 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" containerID="cri-o://840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" gracePeriod=30 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.196936 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.197062 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" containerID="cri-o://016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" gracePeriod=30 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.231972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerStarted","Data":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"} 
Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.232296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.252693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.275931 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.275907456 podStartE2EDuration="5.275907456s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:04.23112899 +0000 UTC m=+1356.312066304" watchObservedRunningTime="2026-02-14 04:32:04.275907456 +0000 UTC m=+1356.356844770" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.311672 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" podStartSLOduration=5.311643903 podStartE2EDuration="5.311643903s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:04.259547063 +0000 UTC m=+1356.340484377" watchObservedRunningTime="2026-02-14 04:32:04.311643903 +0000 UTC m=+1356.392581217" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.447569 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547184 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547250 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547609 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.548931 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.549076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.555228 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx" (OuterVolumeSpecName: "kube-api-access-6tmkx") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). 
InnerVolumeSpecName "kube-api-access-6tmkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.555859 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts" (OuterVolumeSpecName: "scripts") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.612858 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.651749 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652180 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652195 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652207 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") on node 
\"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.744602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data" (OuterVolumeSpecName: "config-data") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.755342 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.791100 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.858451 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.026015 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" path="/var/lib/kubelet/pods/ead79748-92fd-4acc-9abb-e5d73a7be7da/volumes" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.108438 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.265711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.268999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"fec759d47361c43e0a7e0280d89486799080a9e793713da877ee4655c98870f4"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.269062 4867 scope.go:117] "RemoveContainer" containerID="c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.269243 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.272930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.272983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod 
\"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273173 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273272 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.274000 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.275876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs" (OuterVolumeSpecName: "logs") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.281819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts" (OuterVolumeSpecName: "scripts") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.282440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h" (OuterVolumeSpecName: "kube-api-access-dhr5h") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "kube-api-access-dhr5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.283612 4867 generic.go:334] "Generic (PLEG): container finished" podID="defe0915-1f3e-4357-ba66-529a3801b279" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" exitCode=0 Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.283654 4867 generic.go:334] "Generic (PLEG): container finished" podID="defe0915-1f3e-4357-ba66-529a3801b279" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" exitCode=143 Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"b7df10ba039fcee4e4b9fcdc56451b0c829f6865f31aa92d7e75f9d0c4ffbbef"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284644 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.293606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.303527 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.406744463 podStartE2EDuration="6.303488716s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.529381798 +0000 UTC m=+1352.610319112" lastFinishedPulling="2026-02-14 04:32:01.426126051 +0000 UTC m=+1353.507063365" observedRunningTime="2026-02-14 04:32:05.293973074 +0000 UTC m=+1357.374910388" watchObservedRunningTime="2026-02-14 04:32:05.303488716 +0000 UTC m=+1357.384426030" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.323032 4867 scope.go:117] "RemoveContainer" containerID="80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.326263 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.357406 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379433 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379467 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379487 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379526 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379537 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379845 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 
04:32:05.387748 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data" (OuterVolumeSpecName: "config-data") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.397131 4867 scope.go:117] "RemoveContainer" containerID="5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.418663 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419296 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419328 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419354 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419363 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419384 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419393 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419423 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419434 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419446 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419483 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419492 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419568 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419580 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419890 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419912 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419926 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419935 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419945 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419963 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419978 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.436346 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.442189 4867 scope.go:117] "RemoveContainer" containerID="12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.442467 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.443017 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.446025 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.481818 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.483904 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.514295 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.542664 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.543167 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc 
kubenswrapper[4867]: I0214 04:32:05.543200 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} err="failed to get container status \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": rpc error: code = NotFound desc = could not find container \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543232 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.543578 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543633 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} err="failed to get container status \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543694 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 
04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543942 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} err="failed to get container status \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": rpc error: code = NotFound desc = could not find container \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543966 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.544157 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} err="failed to get container status \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586248 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " 
pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.659662 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.673378 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.689687 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.689993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690155 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690299 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") 
pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690718 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.691543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.691568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.698366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.698989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.700118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.700802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.708317 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.715077 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.730666 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.730835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.736591 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.748873 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.749140 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.775637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.789587 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"]
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.806481 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.811611 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.811941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.839703 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"]
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914570 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914607 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914644 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914813 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914886 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.016675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.016980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017472 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017918 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.021583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.027237 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.027267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.028247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.031186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.031752 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.035209 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.035280 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.036307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037313 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.040268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.041518 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.276004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.288128 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.422996 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.679350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.840066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.952831 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"]
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.953332 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" containerID="cri-o://df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" gracePeriod=30
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.954179 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" containerID="cri-o://f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" gracePeriod=30
Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.994611 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.096844 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" path="/var/lib/kubelet/pods/20f83c90-35bd-4d40-90e4-f992c7844a5d/volumes"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.098226 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defe0915-1f3e-4357-ba66-529a3801b279" path="/var/lib/kubelet/pods/defe0915-1f3e-4357-ba66-529a3801b279/volumes"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.099081 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"]
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102302 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"]
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102329 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"]
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102407 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: W0214 04:32:07.132480 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3375fa12_2e3a_431e_9341_72d5a213083e.slice/crio-144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2 WatchSource:0}: Error finding container 144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2: Status 404 returned error can't find the container with id 144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156617 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260111 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260299 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.270469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.272036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.279213 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.280154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.282452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.287179 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.292606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.344624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"c5af4b5f8602cd5b59f39b9b073911fd553022dc70a80e4fe1af5abd876f1920"}
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.348523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2"}
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.350613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"a44dda6f8296393dedba55dfb959cf2361267f611586efd623055a585266e1ef"}
Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.482895 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7886d5654f-wzr2s"
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.420840 4867 generic.go:334] "Generic (PLEG): container finished" podID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerID="f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" exitCode=0
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.420928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9"}
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.425845 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a"}
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"18cd9262ede2c3ab09044a01019d623017616a5fa4d03ea3db50d9c90f8a8f5d"}
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"503b65e504ecf9985fd72e4c51681c3c1b4b6bf77287a996db21c0ac81e2c2de"}
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434116 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-584d8cfdf8-4lt8c"
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.453813 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-584d8cfdf8-4lt8c" podStartSLOduration=3.453794979 podStartE2EDuration="3.453794979s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:08.450785608 +0000 UTC m=+1360.531722922" watchObservedRunningTime="2026-02-14 04:32:08.453794979 +0000 UTC m=+1360.534732293"
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.468847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"d8c5c9f74ccd78823f9d33bdc90facd5590dbe73c66e68e9f9f90cdf5225e85c"}
Feb 14 04:32:08 crc kubenswrapper[4867]: W0214 04:32:08.501768 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a16bfe_366a_4143_932a_e0b51615c401.slice/crio-8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9 WatchSource:0}: Error finding container 8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9: Status 404 returned error can't find the container with id 8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9
Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.503271 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"]
Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.484770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0"}
Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.485243 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486715 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"5e0a62aa6ec3491a2cf67a13cda5ff17befc72d1618cee92b1cfc69b6aa572e0"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"4c03492a1b05456f7e21cb68a1fce0332c5fc554391765af8a0d2c450f2b4455"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.488332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.493593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"6a35c006524f36990453cacbcd07435b4ee94829298141ee3c860cd141deda2f"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.509031 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7886d5654f-wzr2s" podStartSLOduration=3.5090071419999997 podStartE2EDuration="3.509007142s" podCreationTimestamp="2026-02-14 04:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
04:32:09.506694899 +0000 UTC m=+1361.587632213" watchObservedRunningTime="2026-02-14 04:32:09.509007142 +0000 UTC m=+1361.589944466" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.540800 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.540775485 podStartE2EDuration="4.540775485s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:09.533176141 +0000 UTC m=+1361.614113465" watchObservedRunningTime="2026-02-14 04:32:09.540775485 +0000 UTC m=+1361.621712799" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.641441 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.829398 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.096010 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.122835 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.189917 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.194853 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" 
containerID="cri-o://3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" gracePeriod=10 Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.508716 4867 generic.go:334] "Generic (PLEG): container finished" podID="41682938-f603-460d-91e2-9de423799697" containerID="3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" exitCode=0 Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.508829 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0"} Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.509836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.579225 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.829729 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896669 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.917251 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4" (OuterVolumeSpecName: "kube-api-access-bwkw4") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "kube-api-access-bwkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.006418 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.033380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.037112 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.085305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.087918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config" (OuterVolumeSpecName: "config") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.100073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108927 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108959 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108970 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108978 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108986 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.523378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827"} Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.523876 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" 
event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb"} Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533801 4867 scope.go:117] "RemoveContainer" containerID="3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533948 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" containerID="cri-o://ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" gracePeriod=30 Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.534005 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.534065 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" containerID="cri-o://972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" gracePeriod=30 Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.604452 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.192844962 podStartE2EDuration="6.604426202s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="2026-02-14 04:32:06.435732149 +0000 UTC m=+1358.516669463" lastFinishedPulling="2026-02-14 04:32:10.847313389 +0000 UTC m=+1362.928250703" observedRunningTime="2026-02-14 04:32:11.570493941 +0000 UTC m=+1363.651431255" watchObservedRunningTime="2026-02-14 04:32:11.604426202 +0000 UTC m=+1363.685363516" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.625738 4867 scope.go:117] "RemoveContainer" 
containerID="89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.666936 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.684803 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:11 crc kubenswrapper[4867]: E0214 04:32:11.846125 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41682938_f603_460d_91e2_9de423799697.slice/crio-fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41682938_f603_460d_91e2_9de423799697.slice\": RecentStats: unable to find data in memory cache]" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.852258 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.951839 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.547472 4867 generic.go:334] "Generic (PLEG): container finished" podID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerID="df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" exitCode=0 Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.547560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34"} Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.557878 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="b6c55469-3aa2-4471-932a-442ce56570a7" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" exitCode=0 Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.557956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.014288 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41682938-f603-460d-91e2-9de423799697" path="/var/lib/kubelet/pods/41682938-f603-460d-91e2-9de423799697/volumes" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.030151 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171252 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171348 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 
crc kubenswrapper[4867]: I0214 04:32:13.171386 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171573 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171686 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.185479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2" (OuterVolumeSpecName: "kube-api-access-lhvs2") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "kube-api-access-lhvs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.191685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.259666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.263635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.265605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config" (OuterVolumeSpecName: "config") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.275619 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.275954 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276026 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276084 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276146 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.299152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.356931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.378660 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.378692 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.410957 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.582954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.584424 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.585791 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586957 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.587795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.590991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq" (OuterVolumeSpecName: "kube-api-access-kv2hq") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "kube-api-access-kv2hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.595457 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6c55469-3aa2-4471-932a-442ce56570a7" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" exitCode=0 Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.595641 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"3e15ae2331b94d3c6d65cab2376b0b1e088c96cfaa63266969feb367a3f3d213"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596853 4867 scope.go:117] "RemoveContainer" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.597355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.603738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts" (OuterVolumeSpecName: "scripts") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.624874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"028f5efc08b53a55521858d44a43207730eee63dfa58503296592bae2f4868dd"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.625336 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.647711 4867 scope.go:117] "RemoveContainer" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691663 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691698 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691709 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691720 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691848 4867 scope.go:117] "RemoveContainer" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc 
kubenswrapper[4867]: E0214 04:32:13.694972 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": container with ID starting with 972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238 not found: ID does not exist" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.695015 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} err="failed to get container status \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": rpc error: code = NotFound desc = could not find container \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": container with ID starting with 972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238 not found: ID does not exist" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.695051 4867 scope.go:117] "RemoveContainer" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.701605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.702107 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": container with ID starting with ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede not found: ID does not exist" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.702465 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} err="failed to get container status \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": rpc error: code = NotFound desc = could not find container \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": container with ID starting with ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede not found: ID does not exist" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.702726 4867 scope.go:117] "RemoveContainer" containerID="f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.731656 4867 scope.go:117] "RemoveContainer" containerID="df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.740683 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.742607 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data" (OuterVolumeSpecName: "config-data") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.770021 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.797067 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.797326 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.935536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.945518 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.958685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959143 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959166 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" 
containerName="probe" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959211 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959217 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959249 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959267 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41682938-f603-460d-91e2-9de423799697" containerName="init" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959274 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41682938-f603-460d-91e2-9de423799697" containerName="init" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959290 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959296 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959487 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959517 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" Feb 14 04:32:13 crc 
kubenswrapper[4867]: I0214 04:32:13.959530 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959539 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959552 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.960891 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.963831 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.986568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.104998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105390 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105593 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.209779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.209914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210069 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " 
pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.216781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.216862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.218894 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.220240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.242482 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.278560 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.796297 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.012096 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" path="/var/lib/kubelet/pods/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d/volumes" Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.013018 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" path="/var/lib/kubelet/pods/b6c55469-3aa2-4471-932a-442ce56570a7/volumes" Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.652209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.652485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"b96089b9e38c0ea636878ef1bd934fcde069d5a09954a966d32d520181a11a44"} Feb 14 04:32:16 crc kubenswrapper[4867]: I0214 04:32:16.665944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"6720ffef72a95db4909acc117037c90ac9a391f6a23631323aba22f62f962e10"} Feb 14 04:32:16 crc kubenswrapper[4867]: I0214 04:32:16.691040 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6910190529999998 podStartE2EDuration="3.691019053s" podCreationTimestamp="2026-02-14 04:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:16.686732488 +0000 UTC m=+1368.767669802" watchObservedRunningTime="2026-02-14 04:32:16.691019053 +0000 UTC m=+1368.771956387" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.846218 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.889871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.992523 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.992790 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" containerID="cri-o://3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456" gracePeriod=30 Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.993423 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" containerID="cri-o://d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81" gracePeriod=30 Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.691218 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bf24394-6465-476f-a99e-f46fce318656" containerID="3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456" exitCode=143 Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.691656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" 
event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456"} Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.904354 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.279868 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.760084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.761147 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:20 crc kubenswrapper[4867]: I0214 04:32:20.962615 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.417692 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35138->10.217.0.201:9311: read: connection reset by peer" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.417884 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35132->10.217.0.201:9311: read: connection reset by peer" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.747764 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bf24394-6465-476f-a99e-f46fce318656" 
containerID="d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81" exitCode=0 Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.747816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81"} Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.088067 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.181975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 
04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182410 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs" (OuterVolumeSpecName: "logs") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.183191 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.192791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.207819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz" (OuterVolumeSpecName: "kube-api-access-2bvxz") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "kube-api-access-2bvxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.208317 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.210084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.239831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.284952 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.296379 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.296886 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.285336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data" (OuterVolumeSpecName: "config-data") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: 
"3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.312356 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"]
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.312626 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74d7c6cb48-8wr7l" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log" containerID="cri-o://e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218" gracePeriod=30
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.313074 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74d7c6cb48-8wr7l" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api" containerID="cri-o://95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86" gracePeriod=30
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.399920 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.761042 4867 generic.go:334] "Generic (PLEG): container finished" podID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerID="e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218" exitCode=143
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.761160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218"}
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f"}
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764476 4867 scope.go:117] "RemoveContainer" containerID="d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81"
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764431 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722"
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.794635 4867 scope.go:117] "RemoveContainer" containerID="3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456"
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.817267 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"]
Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.830487 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"]
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.011384 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf24394-6465-476f-a99e-f46fce318656" path="/var/lib/kubelet/pods/3bf24394-6465-476f-a99e-f46fce318656/volumes"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.061468 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 14 04:32:23 crc kubenswrapper[4867]: E0214 04:32:23.062076 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062103 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log"
Feb 14 04:32:23 crc kubenswrapper[4867]: E0214 04:32:23.062144 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062154 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062479 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062556 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.063532 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.065837 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.065837 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-th9bg"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.071896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.088438 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218240 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218260 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321243 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.323166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.325541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.325879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.339306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.382659 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 04:32:23 crc kubenswrapper[4867]: W0214 04:32:23.870300 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fdee887_8ecb_4c1e_8a88_0284fc050f0e.slice/crio-73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132 WatchSource:0}: Error finding container 73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132: Status 404 returned error can't find the container with id 73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132
Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.874691 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 14 04:32:24 crc kubenswrapper[4867]: I0214 04:32:24.535959 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 14 04:32:24 crc kubenswrapper[4867]: I0214 04:32:24.791716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6fdee887-8ecb-4c1e-8a88-0284fc050f0e","Type":"ContainerStarted","Data":"73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132"}
Feb 14 04:32:25 crc kubenswrapper[4867]: I0214 04:32:25.808308 4867 generic.go:334] "Generic (PLEG): container finished" podID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerID="95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86" exitCode=0
Feb 14 04:32:25 crc kubenswrapper[4867]: I0214 04:32:25.808686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86"}
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.132606 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l"
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311668 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311784 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311960 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") "
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.314370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs" (OuterVolumeSpecName: "logs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.319989 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts" (OuterVolumeSpecName: "scripts") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.320523 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2" (OuterVolumeSpecName: "kube-api-access-nzmv2") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "kube-api-access-nzmv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.401809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.412831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data" (OuterVolumeSpecName: "config-data") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416085 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416268 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416381 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416486 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416591 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.475232 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.487766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.519491 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.519540 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824493 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"d72d747bf641f17caffe57b13805170a59917becd98a04f814a50119c9f846ba"}
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824600 4867 scope.go:117] "RemoveContainer" containerID="95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86"
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824667 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l"
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.856651 4867 scope.go:117] "RemoveContainer" containerID="e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218"
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.870629 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"]
Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.887474 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"]
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.014594 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" path="/var/lib/kubelet/pods/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2/volumes"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.155857 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"]
Feb 14 04:32:27 crc kubenswrapper[4867]: E0214 04:32:27.156427 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156445 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log"
Feb 14 04:32:27 crc kubenswrapper[4867]: E0214 04:32:27.156487 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156495 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156758 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156971 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.158230 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162365 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.182429 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"]
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238641 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238717 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.239821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.240219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.240293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346420 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346839 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346916 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.347004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.348027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.349144 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.351326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.354566 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.364408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.487399 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:28 crc kubenswrapper[4867]: W0214 04:32:28.123896 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76fdab94_9bfb_48b7_82f9_bdd6d2258cdb.slice/crio-b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233 WatchSource:0}: Error finding container b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233: Status 404 returned error can't find the container with id b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.139910 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"]
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"b59bd80307cb38da657610fdfea874e3ba1d1dada932f211c3e5710d88178369"}
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"338afbbd6ca87f6d2a8404cb72131a62bf136cc006b50cb5ceea030a6fa1583b"}
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233"}
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.853234 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.853260 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5559ff585f-sb7wb"
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.857999 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858293 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" containerID="cri-o://fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" gracePeriod=30
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858717 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" containerID="cri-o://48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" gracePeriod=30
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858863 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" containerID="cri-o://fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" gracePeriod=30
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858906 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" containerID="cri-o://fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" gracePeriod=30
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.869343 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.205:3000/\": EOF"
Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.879086 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5559ff585f-sb7wb" podStartSLOduration=1.879064981 podStartE2EDuration="1.879064981s" podCreationTimestamp="2026-02-14 04:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:28.878603058 +0000 UTC m=+1380.959540372" watchObservedRunningTime="2026-02-14 04:32:28.879064981 +0000 UTC m=+1380.960002295"
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866061 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" exitCode=0
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866432 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" exitCode=2
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866448 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" exitCode=0
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827"}
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0"}
Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a"}
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251010 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251383 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.252409 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.252487 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" gracePeriod=600
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.899963 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" exitCode=0
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.900008 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"}
Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.900040 4867 scope.go:117] "RemoveContainer" containerID="a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"
Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.052124 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"]
Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.055200 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.060938 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.061016 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.061450 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pzjfh" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.114570 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.216283 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219190 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.253078 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.317716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.319501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321546 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " 
pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322258 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322322 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " 
pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322351 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.333109 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.350910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.351341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.352023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.356817 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmf8\" (UniqueName: 
\"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.380355 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.382978 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.390826 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.399662 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.404675 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.424913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.424977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425073 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425147 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425227 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.426940 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.429141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.445542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527251 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527578 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527992 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528146 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528366 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.532452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.534151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.534949 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.547869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod 
\"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.553171 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630418 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.634942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.635128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.635542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.658401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.797859 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.808103 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.921853 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" exitCode=0 Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.921902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2"} Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.528462 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.534377 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.591119 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.634212 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.637547 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.652130 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.676659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.676972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.714142 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.715942 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.760611 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.762989 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.766872 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.778957 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.780052 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.791580 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.807789 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.816191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.881882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.881963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c287t\" (UniqueName: 
\"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.888956 4867 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.890620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.892944 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.898143 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.907771 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.928825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.981443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.986295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987330 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: 
\"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.988300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.031256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.035220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v628v\" (UniqueName: 
\"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.049718 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.089481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.099920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.099459 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.090693 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.145155 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.146043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.177232 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.180321 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.192669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.278475 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.305467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.305602 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.409292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.409451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.410273 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.428329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.535607 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:35 crc kubenswrapper[4867]: I0214 04:32:35.777454 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.205:3000/\": dial tcp 10.217.0.205:3000: connect: connection refused" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.355542 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.401230 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.502955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.511674 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.526665 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.534933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535169 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: 
\"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535435 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.538767 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.565101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts" (OuterVolumeSpecName: "scripts") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.570796 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.640203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7" (OuterVolumeSpecName: "kube-api-access-4bfq7") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "kube-api-access-4bfq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650575 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650612 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650624 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650636 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 
04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.702549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.732729 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.733116 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c5fcd7cb-sr8z9" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd" containerID="cri-o://a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" gracePeriod=30 Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.733327 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c5fcd7cb-sr8z9" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api" containerID="cri-o://a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184" gracePeriod=30 Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.753059 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.797749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.856252 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.875478 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data" (OuterVolumeSpecName: "config-data") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.959467 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.993010 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerStarted","Data":"d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"c5af4b5f8602cd5b59f39b9b073911fd553022dc70a80e4fe1af5abd876f1920"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999773 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999810 4867 scope.go:117] "RemoveContainer" containerID="fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.005703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.050190 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.080970 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.083223 4867 scope.go:117] "RemoveContainer" containerID="48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.142715 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143283 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143296 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143308 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143314 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 
14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143329 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143335 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143347 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143353 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143646 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143657 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143669 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143716 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.153888 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.157344 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.160903 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164520 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164575 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164630 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164707 4867 scope.go:117] "RemoveContainer" containerID="fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164792 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164927 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.183747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.220571 4867 scope.go:117] "RemoveContainer" containerID="fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.256347 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.256782 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log" 
containerID="cri-o://461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1" gracePeriod=30 Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.257784 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd" containerID="cri-o://12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034" gracePeriod=30 Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.273642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.274450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.274498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.278762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.278963 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.280014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.285565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.291850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.293521 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.294496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc 
kubenswrapper[4867]: I0214 04:32:38.295072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.295498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.299297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.301908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.506106 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.029758 4867 generic.go:334] "Generic (PLEG): container finished" podID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerID="a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" exitCode=0 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.034731 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" path="/var/lib/kubelet/pods/7ce36665-fb1a-4860-bc8a-5e12431d4cd6/volumes" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.036022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerStarted","Data":"edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.036047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.053078 4867 generic.go:334] "Generic (PLEG): container finished" podID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerID="461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1" exitCode=143 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.053175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.069808 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"6fdee887-8ecb-4c1e-8a88-0284fc050f0e","Type":"ContainerStarted","Data":"15477c8fe9da164a15217ee678063475cabb536791a08c99852060806de268b3"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.084036 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-5ffts" podStartSLOduration=6.084011855 podStartE2EDuration="6.084011855s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:39.070377019 +0000 UTC m=+1391.151314333" watchObservedRunningTime="2026-02-14 04:32:39.084011855 +0000 UTC m=+1391.164949169" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.103655 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.108333205 podStartE2EDuration="16.103634763s" podCreationTimestamp="2026-02-14 04:32:23 +0000 UTC" firstStartedPulling="2026-02-14 04:32:23.87359682 +0000 UTC m=+1375.954534134" lastFinishedPulling="2026-02-14 04:32:36.868898378 +0000 UTC m=+1388.949835692" observedRunningTime="2026-02-14 04:32:39.086547333 +0000 UTC m=+1391.167484647" watchObservedRunningTime="2026-02-14 04:32:39.103634763 +0000 UTC m=+1391.184572077" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.150068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.165216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.177634 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.207405 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c28a361_2a59_45f2_baeb_e4d5313b6c17.slice/crio-279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51 WatchSource:0}: Error finding container 279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51: Status 404 returned error can't find the container with id 279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.435678 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.437432 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.470061 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.471847 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.504723 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.534785 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546126 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546228 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " 
pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546470 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.566703 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7959a0fa_00bd_492c_9892_a8c8727549c6.slice/crio-509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60 WatchSource:0}: Error 
finding container 509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60: Status 404 returned error can't find the container with id 509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.577579 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.579326 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.604561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.635947 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: 
\"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666588 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666638 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666662 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666930 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: 
\"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.667059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.667111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.673206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: 
\"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.673946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.674272 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.674899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.694920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.695891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.703623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.705927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.707251 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770261 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.787681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.793474 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.804380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.812675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod 
\"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.900698 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.909912 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod708fbc3f_a05a_4b29_b455_32db117495d1.slice/crio-4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48 WatchSource:0}: Error finding container 4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48: Status 404 returned error can't find the container with id 4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.923467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.952622 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.979963 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.980896 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.117017 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.145756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.163691 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.181238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerStarted","Data":"4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.189318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerStarted","Data":"38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.206490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerStarted","Data":"6a911f22f2445bf520e5b58ee0d37ec6810d7143ae0f24d44f2a1ba98f13ca47"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.228382 4867 generic.go:334] "Generic (PLEG): container finished" podID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerID="edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a" exitCode=0 Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.231410 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerDied","Data":"edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.237747 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerStarted","Data":"509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.239650 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerStarted","Data":"6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.239679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerStarted","Data":"b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.258469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerStarted","Data":"26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.289379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerStarted","Data":"073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.309420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerStarted","Data":"5411ca415d9a87d0850d6fbf4033b3de2e9b4aed86c0a53707211fd73a6a37cc"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.326167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerStarted","Data":"279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.339206 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-8539-account-create-update-9j9p8" podStartSLOduration=6.339184721 podStartE2EDuration="6.339184721s" podCreationTimestamp="2026-02-14 04:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:40.293104413 +0000 UTC m=+1392.374041747" watchObservedRunningTime="2026-02-14 04:32:40.339184721 +0000 UTC m=+1392.420122035" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.795436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:40 crc kubenswrapper[4867]: W0214 04:32:40.858697 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7535f37c_f2f6_4e75_bfa2_48211fe86ef6.slice/crio-dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43 WatchSource:0}: Error finding container dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43: Status 404 returned error can't find the container with id dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43 Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.953958 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:41 crc kubenswrapper[4867]: W0214 04:32:41.021764 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf9a1d71_05e1_40ab_90a7_530d2083fe14.slice/crio-da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320 WatchSource:0}: Error finding container da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320: Status 404 returned error can't find the container with id da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.285372 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.357854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"0c811d9a27d93bea50cf31c5a59216074fd035a7dfb9975cb4e0ef8eaca3d79f"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.366620 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerStarted","Data":"ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.371037 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerStarted","Data":"dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.377628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerStarted","Data":"d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43"} Feb 14 04:32:41 crc kubenswrapper[4867]: W0214 04:32:41.382444 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e650fa8_a893_47e0_a5d5_0df60430ea9e.slice/crio-a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365 WatchSource:0}: Error finding container a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365: Status 404 returned error can't find the container with id a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.411640 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-a338-account-create-update-2zjhb" podStartSLOduration=8.411617046 podStartE2EDuration="8.411617046s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.404826213 +0000 UTC m=+1393.485763537" watchObservedRunningTime="2026-02-14 04:32:41.411617046 +0000 UTC m=+1393.492554360" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.414579 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerStarted","Data":"6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.415585 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.419950 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerStarted","Data":"0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.427624 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" podStartSLOduration=8.427600295 podStartE2EDuration="8.427600295s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.422752855 +0000 UTC m=+1393.503690169" watchObservedRunningTime="2026-02-14 04:32:41.427600295 +0000 UTC m=+1393.508537609" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.428319 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerID="82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c" exitCode=0 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.428411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.443796 4867 generic.go:334] "Generic (PLEG): container finished" podID="2b7729cf-7332-4432-999f-fbee997b2201" containerID="6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c" exitCode=0 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.443860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerDied","Data":"6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.461927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerStarted","Data":"da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.475006 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-677c4ffcdf-n44s6" podStartSLOduration=9.474984328 podStartE2EDuration="9.474984328s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.443427741 +0000 UTC m=+1393.524365045" watchObservedRunningTime="2026-02-14 04:32:41.474984328 +0000 UTC m=+1393.555921642" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.505461 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-t8trt" podStartSLOduration=8.505434727 podStartE2EDuration="8.505434727s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.462524264 +0000 UTC m=+1393.543461578" watchObservedRunningTime="2026-02-14 04:32:41.505434727 +0000 UTC m=+1393.586372041" Feb 14 04:32:41 crc kubenswrapper[4867]: E0214 04:32:41.911025 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406727d4_ffca_4ade_b0ca_b5dbfcb23e24.slice/crio-conmon-12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406727d4_ffca_4ade_b0ca_b5dbfcb23e24.slice/crio-12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.481155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerStarted","Data":"5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.481725 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.485914 4867 generic.go:334] "Generic (PLEG): container finished" podID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerID="a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.485995 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.487556 4867 generic.go:334] "Generic (PLEG): container finished" podID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerID="ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.487621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerDied","Data":"ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.490192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerStarted","Data":"f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.490399 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.506855 4867 generic.go:334] "Generic (PLEG): container finished" podID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerID="3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.506959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerDied","Data":"3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.509555 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" podStartSLOduration=10.509477123 podStartE2EDuration="10.509477123s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:42.497210554 +0000 UTC m=+1394.578147868" watchObservedRunningTime="2026-02-14 04:32:42.509477123 +0000 UTC m=+1394.590414437" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.521059 4867 generic.go:334] "Generic (PLEG): container finished" podID="708fbc3f-a05a-4b29-b455-32db117495d1" containerID="0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.521407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerDied","Data":"0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.531105 4867 generic.go:334] "Generic (PLEG): container finished" podID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerID="12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.531189 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.538744 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7797898b6d-54xz8" podStartSLOduration=3.538724699 podStartE2EDuration="3.538724699s" podCreationTimestamp="2026-02-14 04:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 04:32:42.53578193 +0000 UTC m=+1394.616719254" watchObservedRunningTime="2026-02-14 04:32:42.538724699 +0000 UTC m=+1394.619662013" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.539101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.544047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerStarted","Data":"a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.546114 4867 generic.go:334] "Generic (PLEG): container finished" podID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerID="d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.546397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerDied","Data":"d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.028531 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.032972 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.066724 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.068434 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.072049 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.072115 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.087928 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.090139 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.098992 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.099225 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.103535 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.123761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155386 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257484 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257842 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258053 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.264813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.264885 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.265222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.267660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.272773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.281289 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.362640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.362948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.362974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.375816 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.378160 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.400286 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.430204 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.530958 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.542542 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.554618 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564602 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"39d679b02b54e70585a87ea7dbf473acb26533d3e4ea7319177999bccaf06766"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564967 4867 scope.go:117] "RemoveContainer" containerID="a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerDied","Data":"b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571759 4867 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571827 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.577924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"a3cc1da73263e85bbf2b7d750ab646192fbf22c988007a55f775707de3030a59"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.578029 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.581561 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.582287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerDied","Data":"d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.582370 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.669992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"2b7729cf-7332-4432-999f-fbee997b2201\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670170 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"2b7729cf-7332-4432-999f-fbee997b2201\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670370 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670405 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " Feb 14 04:32:43 crc 
kubenswrapper[4867]: I0214 04:32:43.670438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670469 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670561 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670790 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod 
\"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670869 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.672298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "289f81c2-9092-4a51-a1b4-8eedaa09aedb" (UID: "289f81c2-9092-4a51-a1b4-8eedaa09aedb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675600 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675663 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 
04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.676872 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681029 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts" (OuterVolumeSpecName: "scripts") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b7729cf-7332-4432-999f-fbee997b2201" (UID: "2b7729cf-7332-4432-999f-fbee997b2201"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.682128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs" (OuterVolumeSpecName: "logs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.691417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h" (OuterVolumeSpecName: "kube-api-access-bbm8h") pod "2b7729cf-7332-4432-999f-fbee997b2201" (UID: "2b7729cf-7332-4432-999f-fbee997b2201"). InnerVolumeSpecName "kube-api-access-bbm8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.691780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4" (OuterVolumeSpecName: "kube-api-access-prsd4") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "kube-api-access-prsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.694764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs" (OuterVolumeSpecName: "kube-api-access-ncmbs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "kube-api-access-ncmbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.725667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.734775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n" (OuterVolumeSpecName: "kube-api-access-pfw6n") pod "289f81c2-9092-4a51-a1b4-8eedaa09aedb" (UID: "289f81c2-9092-4a51-a1b4-8eedaa09aedb"). InnerVolumeSpecName "kube-api-access-pfw6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.746079 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (OuterVolumeSpecName: "glance") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779023 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779237 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779295 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779365 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779419 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779478 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779548 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779613 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779666 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779914 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.823009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.827225 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.827398 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d") on node "crc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.833159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config" (OuterVolumeSpecName: "config") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.858791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.859830 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.890721 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891129 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891157 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891172 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891187 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.927122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data" (OuterVolumeSpecName: "config-data") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.967636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.993919 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.993954 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.244494 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.270064 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.291312 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292123 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292137 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292150 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update" Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292173 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292180 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create" Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292240 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292246 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log" Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292258 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292263 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292455 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292467 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292490 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292516 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292538 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.298004 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.301467 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.301664 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.322732 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.408269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.408618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410639 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411038 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411831 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.514060 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516031 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516129 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.519942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.520529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.521158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.523686 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.523740 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.525491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.538417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.608737 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.633101 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.663680 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.674388 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.710409 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.858854 4867 scope.go:117] "RemoveContainer" containerID="a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.919473 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.960285 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.964742 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.978492 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.040712 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" path="/var/lib/kubelet/pods/406727d4-ffca-4ade-b0ca-b5dbfcb23e24/volumes" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.042668 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" path="/var/lib/kubelet/pods/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149/volumes" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"708fbc3f-a05a-4b29-b455-32db117495d1\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"730dbd9b-ddff-4d09-89ff-b9135ed83042\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"80c71d92-a9d1-4256-b7be-678dc34d1562\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"80c71d92-a9d1-4256-b7be-678dc34d1562\" (UID: 
\"80c71d92-a9d1-4256-b7be-678dc34d1562\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.046938 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "730dbd9b-ddff-4d09-89ff-b9135ed83042" (UID: "730dbd9b-ddff-4d09-89ff-b9135ed83042"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.047415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80c71d92-a9d1-4256-b7be-678dc34d1562" (UID: "80c71d92-a9d1-4256-b7be-678dc34d1562"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.050686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "708fbc3f-a05a-4b29-b455-32db117495d1" (UID: "708fbc3f-a05a-4b29-b455-32db117495d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"708fbc3f-a05a-4b29-b455-32db117495d1\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051251 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051346 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"730dbd9b-ddff-4d09-89ff-b9135ed83042\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m" (OuterVolumeSpecName: "kube-api-access-fnn7m") pod "80c71d92-a9d1-4256-b7be-678dc34d1562" (UID: "80c71d92-a9d1-4256-b7be-678dc34d1562"). InnerVolumeSpecName "kube-api-access-fnn7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051469 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052245 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041c55d6-87c7-47b4-a53b-9b38cb85e3d2" (UID: "041c55d6-87c7-47b4-a53b-9b38cb85e3d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052905 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052971 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.053044 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.053097 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 
04:32:45.053150 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.055350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v" (OuterVolumeSpecName: "kube-api-access-v628v") pod "041c55d6-87c7-47b4-a53b-9b38cb85e3d2" (UID: "041c55d6-87c7-47b4-a53b-9b38cb85e3d2"). InnerVolumeSpecName "kube-api-access-v628v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.061192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7" (OuterVolumeSpecName: "kube-api-access-k5ch7") pod "708fbc3f-a05a-4b29-b455-32db117495d1" (UID: "708fbc3f-a05a-4b29-b455-32db117495d1"). InnerVolumeSpecName "kube-api-access-k5ch7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.063346 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t" (OuterVolumeSpecName: "kube-api-access-c287t") pod "730dbd9b-ddff-4d09-89ff-b9135ed83042" (UID: "730dbd9b-ddff-4d09-89ff-b9135ed83042"). InnerVolumeSpecName "kube-api-access-c287t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.087544 4867 scope.go:117] "RemoveContainer" containerID="12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160344 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160651 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160661 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.258025 4867 scope.go:117] "RemoveContainer" containerID="461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.643200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerDied","Data":"4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48"} Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.644220 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.643735 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.647039 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"} Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657139 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerDied","Data":"38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2"} Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657200 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657314 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660893 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerDied","Data":"26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab"} Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660957 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerDied","Data":"073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0"} Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692580 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692648 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.700197 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.848773 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.936184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.735024 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce" exitCode=1 Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.735712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.736067 4867 scope.go:117] "RemoveContainer" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.746448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerStarted","Data":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.746489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerStarted","Data":"049d086e76bc10d2a5f14c7d8a9fe02a2d5fd8eadb747b6e9d8413f65e7ceb0e"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 
04:32:46.747142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerStarted","Data":"25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765354 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-667b98697-gxqph" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" containerID="cri-o://25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" gracePeriod=60 Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765582 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.791689 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f55d59bf5-wfw72" podStartSLOduration=3.791660481 podStartE2EDuration="3.791660481s" podCreationTimestamp="2026-02-14 04:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:46.784086218 +0000 UTC m=+1398.865023532" watchObservedRunningTime="2026-02-14 04:32:46.791660481 +0000 UTC m=+1398.872597795" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.795549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerStarted","Data":"830da82a952f0eb79b72815166e8401af585ff9b46564a5260025bbc1ac28ad6"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.796299 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.800991 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3" exitCode=1 Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.801061 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.801776 4867 scope.go:117] "RemoveContainer" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.813773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerStarted","Data":"d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.813959 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" containerID="cri-o://d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" gracePeriod=60 Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.814098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.821835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"ee1285703ead2f3b077f5f2b6bf2a06f26f518800f3dabf9a444c8ff6e1390dd"} Feb 14 04:32:46 crc kubenswrapper[4867]: 
I0214 04:32:46.859735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-667b98697-gxqph" podStartSLOduration=9.061091547 podStartE2EDuration="14.85971289s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="2026-02-14 04:32:39.214776009 +0000 UTC m=+1391.295713313" lastFinishedPulling="2026-02-14 04:32:45.013397342 +0000 UTC m=+1397.094334656" observedRunningTime="2026-02-14 04:32:46.805862133 +0000 UTC m=+1398.886799447" watchObservedRunningTime="2026-02-14 04:32:46.85971289 +0000 UTC m=+1398.940650204" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.868690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"} Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.892711 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" podStartSLOduration=3.892683295 podStartE2EDuration="3.892683295s" podCreationTimestamp="2026-02-14 04:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:46.827182375 +0000 UTC m=+1398.908119689" watchObservedRunningTime="2026-02-14 04:32:46.892683295 +0000 UTC m=+1398.973620629" Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.924374 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" podStartSLOduration=9.177062453 podStartE2EDuration="14.924355956s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="2026-02-14 04:32:39.212756075 +0000 UTC m=+1391.293693389" lastFinishedPulling="2026-02-14 04:32:44.960049568 +0000 UTC m=+1397.040986892" observedRunningTime="2026-02-14 04:32:46.9047533 +0000 UTC m=+1398.985690614" 
watchObservedRunningTime="2026-02-14 04:32:46.924355956 +0000 UTC m=+1399.005293270" Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.564195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.708423 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.708700 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" containerID="cri-o://bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" gracePeriod=10 Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.899696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"0118fc901e642976e79b1611b934e8452bfc845018c10ec766a047bc2408aaf8"} Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.903367 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerStarted","Data":"f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"} Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.905247 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.908280 4867 generic.go:334] "Generic (PLEG): container finished" podID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerID="25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" exitCode=0 Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.908339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerDied","Data":"25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4"} Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.912876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerStarted","Data":"11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7"} Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.917095 4867 generic.go:334] "Generic (PLEG): container finished" podID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerID="d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" exitCode=0 Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.918072 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerDied","Data":"d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6"} Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.959589 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podStartSLOduration=5.234941165 podStartE2EDuration="8.959569172s" podCreationTimestamp="2026-02-14 04:32:39 +0000 UTC" firstStartedPulling="2026-02-14 04:32:41.416850506 +0000 UTC m=+1393.497787820" lastFinishedPulling="2026-02-14 04:32:45.141478513 +0000 UTC m=+1397.222415827" observedRunningTime="2026-02-14 04:32:47.954335921 +0000 UTC m=+1400.035273235" watchObservedRunningTime="2026-02-14 04:32:47.959569172 +0000 UTC m=+1400.040506476" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.113538 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.194472 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228246 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.243330 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: 
"6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.244567 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2" (OuterVolumeSpecName: "kube-api-access-tfcz2") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "kube-api-access-tfcz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.331756 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.341921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.342068 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.342104 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: 
\"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.343923 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.343948 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.387429 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.436343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t" (OuterVolumeSpecName: "kube-api-access-hrk9t") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "kube-api-access-hrk9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.440572 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.440864 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log" containerID="cri-o://70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e" gracePeriod=30 Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.441463 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd" containerID="cri-o://784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4" gracePeriod=30 Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.446765 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.446796 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.475013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data" (OuterVolumeSpecName: "config-data") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.482706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.524360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data" (OuterVolumeSpecName: "config-data") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.530722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552248 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552282 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552295 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552305 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.935364 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.978754 4867 generic.go:334] "Generic (PLEG): container finished" podID="746b9097-84d0-4d00-a92c-808df9206d8a" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" exitCode=0 Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.979950 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"} Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b"} Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980805 4867 scope.go:117] "RemoveContainer" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.997989 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" exitCode=1 Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.998813 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:48 crc kubenswrapper[4867]: E0214 04:32:48.999126 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.011865 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.031572 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" exitCode=1 Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.043045 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.043308 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.045891 4867 generic.go:334] "Generic (PLEG): container finished" podID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerID="70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e" exitCode=143 Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574"} Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerDied","Data":"279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51"} Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054162 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" 
event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"} Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"} Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.068182 4867 scope.go:117] "RemoveContainer" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.076588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerDied","Data":"6a911f22f2445bf520e5b58ee0d37ec6810d7143ae0f24d44f2a1ba98f13ca47"} Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.093801 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098168 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8rdd\" 
(UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.163017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd" (OuterVolumeSpecName: "kube-api-access-j8rdd") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "kube-api-access-j8rdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.188871 4867 scope.go:117] "RemoveContainer" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.202434 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.204107 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": container with ID starting with bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d not found: ID does not exist" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.204148 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"} err="failed to get container status \"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": rpc error: code = NotFound desc = could not find container 
\"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": container with ID starting with bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d not found: ID does not exist" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.204176 4867 scope.go:117] "RemoveContainer" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.218702 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": container with ID starting with 5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e not found: ID does not exist" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.218750 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e"} err="failed to get container status \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": rpc error: code = NotFound desc = could not find container \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": container with ID starting with 5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e not found: ID does not exist" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.218783 4867 scope.go:117] "RemoveContainer" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.277480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config" (OuterVolumeSpecName: "config") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.306031 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.314988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.362146 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.391710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.407652 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.407682 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.415777 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.438133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.443581 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.449737 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.461572 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.478574 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479077 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479094 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479108 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479113 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479134 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479140 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479149 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="init" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479155 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="init" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479170 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479176 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479188 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479193 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479216 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479221 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479235 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479241 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479454 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479463 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479488 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479518 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479531 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479543 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.480345 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.483750 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fspzg" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.483976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.486217 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.491392 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.510069 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.510102 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.536614 4867 scope.go:117] "RemoveContainer" containerID="d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.579971 4867 scope.go:117] "RemoveContainer" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: 
\"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.713689 4867 scope.go:117] "RemoveContainer" containerID="25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.726614 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729730 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: 
\"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.736739 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.739181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.742409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.747085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.761082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.823086 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.051072 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.095085 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:50 crc kubenswrapper[4867]: E0214 04:32:50.095368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.113530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"c2581e3d0590fbe6419e0dbf9d06af960b2f1cc3d05ee57976db9265c7418fa0"} Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.116809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9"} Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.117415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.118532 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:50 crc kubenswrapper[4867]: E0214 04:32:50.118778 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.146703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.146806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.164640 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.171075 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.997061285 podStartE2EDuration="12.171057351s" podCreationTimestamp="2026-02-14 04:32:38 +0000 UTC" firstStartedPulling="2026-02-14 04:32:40.250704873 +0000 UTC m=+1392.331642187" lastFinishedPulling="2026-02-14 04:32:48.424700939 +0000 UTC m=+1400.505638253" observedRunningTime="2026-02-14 04:32:50.168379979 +0000 UTC m=+1402.249317303" watchObservedRunningTime="2026-02-14 04:32:50.171057351 +0000 UTC m=+1402.251994665" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.213873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.213845941 podStartE2EDuration="6.213845941s" podCreationTimestamp="2026-02-14 04:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:50.19595741 +0000 UTC m=+1402.276894724" watchObservedRunningTime="2026-02-14 04:32:50.213845941 +0000 UTC m=+1402.294783265" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.433237 4867 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.012542 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" path="/var/lib/kubelet/pods/4fd29ee2-33af-4629-8c0d-fa62c0e07240/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.014363 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" path="/var/lib/kubelet/pods/6c28a361-2a59-45f2-baeb-e4d5313b6c17/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.015052 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" path="/var/lib/kubelet/pods/746b9097-84d0-4d00-a92c-808df9206d8a/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerStarted","Data":"bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142"} Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138733 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent" containerID="cri-o://979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138776 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd" containerID="cri-o://cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138838 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent" containerID="cri-o://02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138814 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core" containerID="cri-o://cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.139108 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:51 crc kubenswrapper[4867]: E0214 04:32:51.139544 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.139631 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:51 crc kubenswrapper[4867]: E0214 04:32:51.139896 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.198496 4867 generic.go:334] "Generic (PLEG): container finished" podID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" 
containerID="784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.198649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205115 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205148 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4" exitCode=2 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205159 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205168 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.206134 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:52 crc kubenswrapper[4867]: E0214 04:32:52.206534 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" 
podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207122 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207133 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207644 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:52 crc kubenswrapper[4867]: E0214 04:32:52.207955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.251824 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.403964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404127 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404147 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404260 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.407876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.408262 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.413928 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts" (OuterVolumeSpecName: "scripts") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.427959 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm" (OuterVolumeSpecName: "kube-api-access-glsgm") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "kube-api-access-glsgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.452590 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.502809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.514893 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515726 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515745 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515758 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515770 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.681555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.693804 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data" (OuterVolumeSpecName: "config-data") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.721034 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.721071 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.762185 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822365 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822626 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.826339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts" (OuterVolumeSpecName: "scripts") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.826738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.832420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.832663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2" (OuterVolumeSpecName: "kube-api-access-qmjl2") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "kube-api-access-qmjl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.834113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.834237 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835449 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835490 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.839078 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs" (OuterVolumeSpecName: "logs") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.860779 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (OuterVolumeSpecName: "glance") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). 
InnerVolumeSpecName "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.871535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.890241 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data" (OuterVolumeSpecName: "config-data") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.919305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.937948 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938098 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938115 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938126 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938175 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.969298 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.969449 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25") on node "crc" Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.040784 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"0c811d9a27d93bea50cf31c5a59216074fd035a7dfb9975cb4e0ef8eaca3d79f"} Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225335 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225341 4867 scope.go:117] "RemoveContainer" containerID="cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9" Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.231335 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.231305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b"}
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.258395 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.276393 4867 scope.go:117] "RemoveContainer" containerID="cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.284312 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.303413 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.340521 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.357673 4867 scope.go:117] "RemoveContainer" containerID="02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.403269 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404380 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404446 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404481 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404490 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405342 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405359 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405400 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405409 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405427 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405436 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405949 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405972 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406009 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406028 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406049 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406084 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.409850 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.419243 4867 scope.go:117] "RemoveContainer" containerID="979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.420109 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.420630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.447552 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.455815 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.458543 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.461929 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.467660 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.473097 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.515199 4867 scope.go:117] "RemoveContainer" containerID="784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.562236 4867 scope.go:117] "RemoveContainer" containerID="70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566698 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567448 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567760 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.568028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.568281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670813 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.672249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.672995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.676567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.681407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.682852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683025 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683055 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.684151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.684644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.705982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.708498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.711091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.727490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.775525 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.811615 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.818260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.402562 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:54 crc kubenswrapper[4867]: W0214 04:32:54.415841 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53d13a71_03e0_46f0_9ca1_a868d38727f8.slice/crio-d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d WatchSource:0}: Error finding container d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d: Status 404 returned error can't find the container with id d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.549968 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.711315 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.711368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.756040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.775191 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.024162 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" path="/var/lib/kubelet/pods/30f61907-9cb4-4873-99eb-bbb5adf21fcb/volumes"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.025338 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" path="/var/lib/kubelet/pods/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0/volumes"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.362212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"1ea57d131344734c752fa842970f47956e33a4cc4a22d8305307580b76055219"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.362269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"fc781c3d26b2502683df7246dfdd94466c432563aa0b1a5c5fc2fb6ceabe2b9a"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368221 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368559 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.369592 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.484083 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f55d59bf5-wfw72"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.578820 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.324568 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407439 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320"}
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407518 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407810 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409294 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.421609 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv" (OuterVolumeSpecName: "kube-api-access-jxrqv") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "kube-api-access-jxrqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.421734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"}
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.440404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.485643 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566839 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566879 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566889 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.618704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data" (OuterVolumeSpecName: "config-data") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.671486 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.807570 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.823210 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.016579 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" path="/var/lib/kubelet/pods/bf9a1d71-05e1-40ab-90a7-530d2083fe14/volumes"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.269909 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.342048 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"]
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.520608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"}
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.561662 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.562030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"ba02f048f072a54327e598e13510a8bb3841c70f454f4bda93b06c6a6f71f60d"}
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.562067 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.620113 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.6200870080000005 podStartE2EDuration="4.620087008s" podCreationTimestamp="2026-02-14 04:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:57.595717513 +0000 UTC m=+1409.676654817" watchObservedRunningTime="2026-02-14 04:32:57.620087008 +0000 UTC m=+1409.701024322"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.958109 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h"
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.020777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021472 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.028866 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.056094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m" (OuterVolumeSpecName: "kube-api-access-mh87m") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "kube-api-access-mh87m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.121244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.123645 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data" (OuterVolumeSpecName: "config-data") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.123937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125259 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125287 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125301 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:58 crc kubenswrapper[4867]: W0214 04:32:58.125983 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4e650fa8-a893-47e0-a5d5-0df60430ea9e/volumes/kubernetes.io~secret/config-data Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.126014 4867 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data" (OuterVolumeSpecName: "config-data") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.227389 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602068 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365"} Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602131 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602182 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.663072 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.692619 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:59 crc kubenswrapper[4867]: I0214 04:32:59.025642 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" path="/var/lib/kubelet/pods/4e650fa8-a893-47e0-a5d5-0df60430ea9e/volumes" Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.102731 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.103169 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.203346 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.241000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.292847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.293057 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-677c4ffcdf-n44s6" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" 
containerID="cri-o://6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" gracePeriod=60 Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.408087 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.411213 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.416663 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.416731 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-677c4ffcdf-n44s6" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.819334 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.822104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.897296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.901682 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.657410 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.777567 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.777987 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.456834 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.458010 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.462793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.856763 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" exitCode=0 Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.856855 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerDied","Data":"6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639"} Feb 14 
04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.924149 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 14 04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.924357 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privi
leged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-vwg9c_openstack(cd08e0e3-a41f-4b25-b71a-1c968410d52e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.925616 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.458257 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548379 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548636 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.549431 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.561320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8" (OuterVolumeSpecName: "kube-api-access-lgmf8") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "kube-api-access-lgmf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.594756 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.655171 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.656097 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.665859 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.685272 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data" (OuterVolumeSpecName: "config-data") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.758543 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.758588 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880288 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerDied","Data":"5411ca415d9a87d0850d6fbf4033b3de2e9b4aed86c0a53707211fd73a6a37cc"} Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880666 4867 scope.go:117] "RemoveContainer" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.895712 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" containerID="cri-o://c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" gracePeriod=30 Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896037 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" containerID="cri-o://b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" gracePeriod=30 Feb 14 04:33:10 crc 
kubenswrapper[4867]: I0214 04:33:10.896094 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" containerID="cri-o://7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" gracePeriod=30 Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896130 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" containerID="cri-o://9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" gracePeriod=30 Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896174 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"} Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896203 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:33:10 crc kubenswrapper[4867]: E0214 04:33:10.914654 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.939857 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.568828552 podStartE2EDuration="17.939832995s" podCreationTimestamp="2026-02-14 04:32:53 +0000 UTC" firstStartedPulling="2026-02-14 04:32:54.418859845 +0000 UTC m=+1406.499797159" lastFinishedPulling="2026-02-14 04:32:58.789864288 +0000 UTC 
m=+1410.870801602" observedRunningTime="2026-02-14 04:33:10.92401539 +0000 UTC m=+1423.004952704" watchObservedRunningTime="2026-02-14 04:33:10.939832995 +0000 UTC m=+1423.020770309" Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.026306 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.026346 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917633 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" exitCode=0 Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917888 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" exitCode=2 Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"} Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917943 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"} Feb 14 04:33:12 crc kubenswrapper[4867]: I0214 04:33:12.932544 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" exitCode=0 Feb 14 04:33:12 crc kubenswrapper[4867]: I0214 04:33:12.932617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"} Feb 14 04:33:13 crc kubenswrapper[4867]: I0214 04:33:13.011274 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" path="/var/lib/kubelet/pods/a2ce3fe5-1f15-484b-a608-da9f03d714c9/volumes" Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.959643 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960348 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" exitCode=0 Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960399 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"} Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960437 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d"} Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960459 4867 scope.go:117] "RemoveContainer" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.997772 4867 scope.go:117] "RemoveContainer" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.019555 4867 scope.go:117] "RemoveContainer" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" Feb 14 
04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.055239 4867 scope.go:117] "RemoveContainer" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065566 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065670 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065908 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066035 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066162 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.068320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.068752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.074224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch" (OuterVolumeSpecName: "kube-api-access-2x4ch") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "kube-api-access-2x4ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.074315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts" (OuterVolumeSpecName: "scripts") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.103115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112147 4867 scope.go:117] "RemoveContainer" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.112911 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": container with ID starting with b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e not found: ID does not exist" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112960 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"} err="failed to get container status \"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": rpc error: code = NotFound desc = could not find container 
\"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": container with ID starting with b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112991 4867 scope.go:117] "RemoveContainer" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.116673 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": container with ID starting with 7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3 not found: ID does not exist" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.116722 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"} err="failed to get container status \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": rpc error: code = NotFound desc = could not find container \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": container with ID starting with 7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.116755 4867 scope.go:117] "RemoveContainer" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.117284 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": container with ID starting with 9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490 not found: ID does not exist" 
containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.117334 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"} err="failed to get container status \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": rpc error: code = NotFound desc = could not find container \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": container with ID starting with 9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.117357 4867 scope.go:117] "RemoveContainer" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.118089 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": container with ID starting with c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743 not found: ID does not exist" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.118116 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"} err="failed to get container status \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": rpc error: code = NotFound desc = could not find container \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": container with ID starting with c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169233 4867 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169273 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169284 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169293 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169300 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.207707 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data" (OuterVolumeSpecName: "config-data") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.222693 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.272247 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.272295 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.972036 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.011522 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.023256 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.044709 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045207 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045228 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045238 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045245 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045269 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045276 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045290 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045296 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045318 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045326 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045337 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045342 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045361 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045368 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045383 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045389 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045600 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045617 4867 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045625 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045633 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045645 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045659 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045680 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045693 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045707 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045907 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045917 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.048860 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.054137 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.055460 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.068895 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101114 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101816 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " 
pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101979 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.102138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.102532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205263 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205357 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205382 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.206311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: 
I0214 04:33:16.206578 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.210855 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.211134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.222012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.227620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.233585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: 
\"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.376477 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.888992 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.986984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"9521b41bed1d9a154f90b192faa3f2ee97914bb5360639cfc7050d90128f992d"} Feb 14 04:33:17 crc kubenswrapper[4867]: I0214 04:33:17.009276 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" path="/var/lib/kubelet/pods/53d13a71-03e0-46f0-9ca1-a868d38727f8/volumes" Feb 14 04:33:18 crc kubenswrapper[4867]: I0214 04:33:17.999721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} Feb 14 04:33:19 crc kubenswrapper[4867]: I0214 04:33:19.028947 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"} Feb 14 04:33:20 crc kubenswrapper[4867]: I0214 04:33:20.051782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.078481 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.079065 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.108347 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.00288849 podStartE2EDuration="6.108329528s" podCreationTimestamp="2026-02-14 04:33:16 +0000 UTC" firstStartedPulling="2026-02-14 04:33:16.913999873 +0000 UTC m=+1428.994937187" lastFinishedPulling="2026-02-14 04:33:21.019440921 +0000 UTC m=+1433.100378225" observedRunningTime="2026-02-14 04:33:22.105388129 +0000 UTC m=+1434.186325473" watchObservedRunningTime="2026-02-14 04:33:22.108329528 +0000 UTC m=+1434.189266842" Feb 14 04:33:27 crc kubenswrapper[4867]: I0214 04:33:27.135746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerStarted","Data":"0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91"} Feb 14 04:33:27 crc kubenswrapper[4867]: I0214 04:33:27.160633 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podStartSLOduration=2.188576522 podStartE2EDuration="38.160610297s" podCreationTimestamp="2026-02-14 04:32:49 +0000 UTC" firstStartedPulling="2026-02-14 04:32:50.442907176 +0000 UTC m=+1402.523844490" lastFinishedPulling="2026-02-14 04:33:26.414940951 +0000 UTC m=+1438.495878265" observedRunningTime="2026-02-14 04:33:27.151330517 +0000 UTC m=+1439.232267831" watchObservedRunningTime="2026-02-14 04:33:27.160610297 +0000 UTC m=+1439.241547611" Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.450923 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.451889 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent" containerID="cri-o://e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452016 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent" containerID="cri-o://a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452012 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core" containerID="cri-o://08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452048 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" containerID="cri-o://831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.473078 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.232:3000/\": EOF" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227775 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" exitCode=0 Feb 14 04:33:35 crc 
kubenswrapper[4867]: I0214 04:33:35.227807 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" exitCode=2 Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227816 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" exitCode=0 Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227898 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227911 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.786258 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") "
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838930 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.839660 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.840868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.853697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts" (OuterVolumeSpecName: "scripts") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.853734 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2" (OuterVolumeSpecName: "kube-api-access-6vcb2") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "kube-api-access-6vcb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.942921 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.943459 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.943588 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.954791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.973491 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data" (OuterVolumeSpecName: "config-data") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.015955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046143 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046188 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046199 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260432 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" exitCode=0
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260532 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"}
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"9521b41bed1d9a154f90b192faa3f2ee97914bb5360639cfc7050d90128f992d"}
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260586 4867 scope.go:117] "RemoveContainer" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260755 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.302909 4867 scope.go:117] "RemoveContainer" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.311657 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.328517 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338775 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338794 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338823 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338829 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338842 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338848 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338868 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338874 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339254 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339275 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339288 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339305 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.343764 4867 scope.go:117] "RemoveContainer" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.345931 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.349315 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.349543 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.359976 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.391910 4867 scope.go:117] "RemoveContainer" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.421964 4867 scope.go:117] "RemoveContainer" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.431761 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": container with ID starting with 831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1 not found: ID does not exist" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.431843 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} err="failed to get container status \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": rpc error: code = NotFound desc = could not find container \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": container with ID starting with 831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1 not found: ID does not exist"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.431875 4867 scope.go:117] "RemoveContainer" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.432563 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": container with ID starting with 08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57 not found: ID does not exist" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.432608 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} err="failed to get container status \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": rpc error: code = NotFound desc = could not find container \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": container with ID starting with 08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57 not found: ID does not exist"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.432642 4867 scope.go:117] "RemoveContainer" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.434689 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": container with ID starting with a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49 not found: ID does not exist" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.434740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"} err="failed to get container status \"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": rpc error: code = NotFound desc = could not find container \"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": container with ID starting with a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49 not found: ID does not exist"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.434769 4867 scope.go:117] "RemoveContainer" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"
Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.435118 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": container with ID starting with e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe not found: ID does not exist" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.435215 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} err="failed to get container status \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": rpc error: code = NotFound desc = could not find container \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": container with ID starting with e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe not found: ID does not exist"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470546 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.573596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.577984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.579448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.580273 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.581799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.582019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.595358 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0"
Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.662971 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.022405 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" path="/var/lib/kubelet/pods/788d7241-b06e-48a1-972a-dcfc775b6284/volumes"
Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.231864 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.282560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"2a9b10b567b5808562253fe944271d1f75330bc923dcd36a8e5d5a2e2e2a94fb"}
Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.804021 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:38 crc kubenswrapper[4867]: I0214 04:33:38.293458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"}
Feb 14 04:33:39 crc kubenswrapper[4867]: I0214 04:33:39.304442 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"}
Feb 14 04:33:40 crc kubenswrapper[4867]: I0214 04:33:40.316102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"}
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.327971 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"}
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328299 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" containerID="cri-o://384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" gracePeriod=30
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328653 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" containerID="cri-o://24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" gracePeriod=30
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328769 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" containerID="cri-o://36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" gracePeriod=30
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328923 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" containerID="cri-o://cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" gracePeriod=30
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.338289 4867 generic.go:334] "Generic (PLEG): container finished" podID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerID="0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91" exitCode=0
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.338341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerDied","Data":"0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91"}
Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.360924 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6911566900000001 podStartE2EDuration="5.360906371s" podCreationTimestamp="2026-02-14 04:33:36 +0000 UTC" firstStartedPulling="2026-02-14 04:33:37.208749218 +0000 UTC m=+1449.289686532" lastFinishedPulling="2026-02-14 04:33:40.878498899 +0000 UTC m=+1452.959436213" observedRunningTime="2026-02-14 04:33:41.357607733 +0000 UTC m=+1453.438545047" watchObservedRunningTime="2026-02-14 04:33:41.360906371 +0000 UTC m=+1453.441843685"
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.351822 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" exitCode=2
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.352157 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" exitCode=0
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.351909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"}
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.352213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"}
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.792023 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c"
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.945915 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") "
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946009 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") "
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946098 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") "
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946243 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") "
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.953745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts" (OuterVolumeSpecName: "scripts") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.959436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx" (OuterVolumeSpecName: "kube-api-access-lbwhx") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "kube-api-access-lbwhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.977936 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.995582 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data" (OuterVolumeSpecName: "config-data") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.049938 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.049988 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.050002 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.050018 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerDied","Data":"bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142"}
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364887 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364599 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.544351 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 14 04:33:43 crc kubenswrapper[4867]: E0214 04:33:43.544929 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.544948 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.545167 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.546067 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.548597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fspzg"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.550441 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.558732 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.765695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.766774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.767204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0"
Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.771771 4867
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.773778 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.801240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.862130 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.955941 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.957841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.968548 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.970012 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.972097 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.007761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.026312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074774 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074910 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.176866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.176915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.176957 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.177051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.178577 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.178708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.197745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.216212 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.335692 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.346122 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.500003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 04:33:44 crc kubenswrapper[4867]: W0214 04:33:44.943925 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486bfb80_5589_4e9e_84d3_10726a066702.slice/crio-4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7 WatchSource:0}: Error finding container 4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7: Status 404 returned error can't find the container with id 4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7 Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.946164 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.967105 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.394459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fdfa169f-f57f-4d9c-bef3-529878be941b","Type":"ContainerStarted","Data":"5583803dac28810d6916569bf5511e8697e9203a7832b557492770fa91b1d747"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.394797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fdfa169f-f57f-4d9c-bef3-529878be941b","Type":"ContainerStarted","Data":"2cb0fed603959c18b96e191a8248a3082516ac2b75f1907d1852f250904be6e6"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.400889 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" 
event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerStarted","Data":"f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.400944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerStarted","Data":"4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.403097 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerStarted","Data":"25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.403142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerStarted","Data":"a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.448379 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.448353466 podStartE2EDuration="2.448353466s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.421653809 +0000 UTC m=+1457.502591133" watchObservedRunningTime="2026-02-14 04:33:45.448353466 +0000 UTC m=+1457.529290780" Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.466212 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-4dwll" podStartSLOduration=2.466187765 podStartE2EDuration="2.466187765s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.445390346 +0000 UTC m=+1457.526327660" watchObservedRunningTime="2026-02-14 04:33:45.466187765 +0000 UTC m=+1457.547125089" Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.513922 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-42f0-account-create-update-vx5cp" podStartSLOduration=2.513899077 podStartE2EDuration="2.513899077s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.459883676 +0000 UTC m=+1457.540820990" watchObservedRunningTime="2026-02-14 04:33:45.513899077 +0000 UTC m=+1457.594836391" Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.416896 4867 generic.go:334] "Generic (PLEG): container finished" podID="486bfb80-5589-4e9e-84d3-10726a066702" containerID="f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51" exitCode=0 Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.417177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerDied","Data":"f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51"} Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.419861 4867 generic.go:334] "Generic (PLEG): container finished" podID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerID="25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64" exitCode=0 Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.419929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerDied","Data":"25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64"} Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 
04:33:46.420074 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:47 crc kubenswrapper[4867]: I0214 04:33:47.432662 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" exitCode=0 Feb 14 04:33:47 crc kubenswrapper[4867]: I0214 04:33:47.432724 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.005585 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.027596 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185191 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"486bfb80-5589-4e9e-84d3-10726a066702\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185545 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnff\" (UniqueName: 
\"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"486bfb80-5589-4e9e-84d3-10726a066702\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.187966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "486bfb80-5589-4e9e-84d3-10726a066702" (UID: "486bfb80-5589-4e9e-84d3-10726a066702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.193447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aa569b6-1ec2-48e8-99c2-f165e5ea9604" (UID: "4aa569b6-1ec2-48e8-99c2-f165e5ea9604"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.196995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff" (OuterVolumeSpecName: "kube-api-access-xnnff") pod "4aa569b6-1ec2-48e8-99c2-f165e5ea9604" (UID: "4aa569b6-1ec2-48e8-99c2-f165e5ea9604"). InnerVolumeSpecName "kube-api-access-xnnff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.197102 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz" (OuterVolumeSpecName: "kube-api-access-zq9pz") pod "486bfb80-5589-4e9e-84d3-10726a066702" (UID: "486bfb80-5589-4e9e-84d3-10726a066702"). InnerVolumeSpecName "kube-api-access-zq9pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289873 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289924 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289942 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289957 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerDied","Data":"4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461402 4867 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461426 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerDied","Data":"a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464350 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464449 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:53 crc kubenswrapper[4867]: I0214 04:33:53.892198 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.430849 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:33:54 crc kubenswrapper[4867]: E0214 04:33:54.431447 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431472 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: E0214 04:33:54.431496 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486bfb80-5589-4e9e-84d3-10726a066702" 
containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431520 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="486bfb80-5589-4e9e-84d3-10726a066702" containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431801 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431829 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="486bfb80-5589-4e9e-84d3-10726a066702" containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.432905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435040 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435878 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.436378 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.451147 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.452798 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.454229 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.456591 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.469523 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.482982 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc 
kubenswrapper[4867]: I0214 04:33:54.538289 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538476 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538609 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640607 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.656045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.661272 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.669568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: 
\"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.669767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.674842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.678274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.681025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.685230 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.687201 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.689759 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.700602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.758042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.762867 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.771463 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.850964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.851049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.851144 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.875622 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.877844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.886094 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.900168 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.932600 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.934841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.940951 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.953939 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc 
kubenswrapper[4867]: I0214 04:33:54.954154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.955216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.972231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.984812 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.011259 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.107485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.107998 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.126384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: 
I0214 04:33:55.127013 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.126965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.128195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.154434 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.159773 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.159827 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.164524 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.203031 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.214385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.215171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220388 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.230528 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.236250 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.260964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.276075 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.297127 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"] Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.308366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.324891 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"] Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.327847 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.328038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.331237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: 
I0214 04:33:55.345912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.348356 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.383089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436897 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.437113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.441762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544636 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.545851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.546931 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547603 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547641 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.576572 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.598339 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.651716 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.850708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.863395 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnl28"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.212579 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.218457 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: W0214 04:33:56.258667 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a61bb72_374e_48c9_bfa2_bbcc3e7503e6.slice/crio-ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500 WatchSource:0}: Error finding container ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500: Status 404 returned error can't find the container with id ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.452981 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.496793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.602628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerStarted","Data":"7421ae1cc8f7150f6013e7337e1040d9ce9252e306ea9b4407c26605f30d6363"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.612127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerStarted","Data":"2f1ec16c434c7fe8c8b2e012785b630337a932a6d095d2d76aaa4e23a79c54fa"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.623265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.626145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"ddf55a66062b9b23ede7bc9c23d0eaea8956685a5ae97e614a4a208a0cb63dd4"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.640054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerStarted","Data":"1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.644733 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerStarted","Data":"4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.644781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerStarted","Data":"b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.694427 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8pszd" podStartSLOduration=2.694403104 podStartE2EDuration="2.694403104s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:56.660197245 +0000 UTC m=+1468.741134559" watchObservedRunningTime="2026-02-14 04:33:56.694403104 +0000 UTC m=+1468.775340418"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.757348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.879774 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.881414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.886881 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.887050 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.907637 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025326 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025992 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.026080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131350 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.138296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.153601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.154108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.154159 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.300149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.690865 4867 generic.go:334] "Generic (PLEG): container finished" podID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374" exitCode=0
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.694424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"}
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.694478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerStarted","Data":"d75507374634724c8a1ef310952a5ce339f06c748d3d87d74bf982c68a7ee156"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.084583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.521135 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.563175 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.734664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerStarted","Data":"9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.734716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerStarted","Data":"2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.743616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerStarted","Data":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.745901 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.767980 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jw78d" podStartSLOduration=2.767947928 podStartE2EDuration="2.767947928s" podCreationTimestamp="2026-02-14 04:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:58.756959513 +0000 UTC m=+1470.837896847" watchObservedRunningTime="2026-02-14 04:33:58.767947928 +0000 UTC m=+1470.848885242"
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.794809 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" podStartSLOduration=3.794776249 podStartE2EDuration="3.794776249s" podCreationTimestamp="2026-02-14 04:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:58.78887438 +0000 UTC m=+1470.869811694" watchObservedRunningTime="2026-02-14 04:33:58.794776249 +0000 UTC m=+1470.875713573"
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.653668 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.717412 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.718724 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns" containerID="cri-o://5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74" gracePeriod=10
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.894574 4867 generic.go:334] "Generic (PLEG): container finished" podID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerID="5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74" exitCode=0
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.894740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"}
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.901886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.530286 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.657983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.658997 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659084 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659231 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659395 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.679902 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h" (OuterVolumeSpecName: "kube-api-access-5bm4h") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "kube-api-access-5bm4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.709438 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.763837 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.773303 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config" (OuterVolumeSpecName: "config") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.780327 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.784153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.864683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.865546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: W0214 04:34:06.865732 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7959a0fa-00bd-492c-9892-a8c8727549c6/volumes/kubernetes.io~configmap/ovsdbserver-sb
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.865753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866576 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866602 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866612 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866621 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.899453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.928423 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerStarted","Data":"bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.935067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.935131 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.949706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.950022 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log" containerID="cri-o://e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" gracePeriod=30
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.950163 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata" containerID="cri-o://f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" gracePeriod=30
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.966854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerStarted","Data":"027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.969893 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.975652 4867 generic.go:334] "Generic (PLEG): container finished" podID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerID="4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76" exitCode=0
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.975722 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerDied","Data":"4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.987476 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.819298479 podStartE2EDuration="12.987452976s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.183016854 +0000 UTC m=+1468.263954168" lastFinishedPulling="2026-02-14 04:34:05.351171341 +0000 UTC m=+1477.432108665" observedRunningTime="2026-02-14 04:34:06.95409206 +0000 UTC m=+1479.035029374" watchObservedRunningTime="2026-02-14 04:34:06.987452976 +0000 UTC m=+1479.068390290"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991520 4867 scope.go:117] "RemoveContainer" containerID="5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991614 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.000758 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" gracePeriod=30
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.014669 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.170466395 podStartE2EDuration="13.014645167s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.509611569 +0000 UTC m=+1468.590548883" lastFinishedPulling="2026-02-14 04:34:05.353790341 +0000 UTC m=+1477.434727655" observedRunningTime="2026-02-14 04:34:06.991359651 +0000 UTC m=+1479.072296965" watchObservedRunningTime="2026-02-14 04:34:07.014645167 +0000 UTC m=+1479.095582481"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.041835 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.97678016 podStartE2EDuration="13.041804586s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.2707049 +0000 UTC m=+1468.351642214" lastFinishedPulling="2026-02-14 04:34:05.335729326 +0000 UTC m=+1477.416666640" observedRunningTime="2026-02-14 04:34:07.01437977 +0000 UTC m=+1479.095317084" watchObservedRunningTime="2026-02-14 04:34:07.041804586 +0000 UTC m=+1479.122741900"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.041878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerStarted","Data":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"}
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.044927 4867 scope.go:117] "RemoveContainer" containerID="82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.082324 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dnl28" podStartSLOduration=3.639823476 podStartE2EDuration="13.082297494s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:55.924732234 +0000 UTC m=+1468.005669538" lastFinishedPulling="2026-02-14 04:34:05.367206242 +0000 UTC m=+1477.448143556" observedRunningTime="2026-02-14 04:34:07.034377527 +0000 UTC m=+1479.115314841" watchObservedRunningTime="2026-02-14 04:34:07.082297494 +0000 UTC m=+1479.163234808"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.188902 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.299235885 podStartE2EDuration="13.188536809s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.467614951 +0000 UTC m=+1468.548552265" lastFinishedPulling="2026-02-14 04:34:05.356915875 +0000 UTC m=+1477.437853189" observedRunningTime="2026-02-14 04:34:07.117703796 +0000 UTC m=+1479.198641110" watchObservedRunningTime="2026-02-14 04:34:07.188536809 +0000 UTC m=+1479.269474123"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.206293 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.223707 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.739964 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.817983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818188 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818373 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.820040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs" (OuterVolumeSpecName: "logs") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.835749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb" (OuterVolumeSpecName: "kube-api-access-lxjqb") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "kube-api-access-lxjqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.912809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.926772 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data" (OuterVolumeSpecName: "config-data") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928558 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928580 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928592 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928601 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014374 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" exitCode=0
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014407 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" exitCode=143
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014644 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016775 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"ddf55a66062b9b23ede7bc9c23d0eaea8956685a5ae97e614a4a208a0cb63dd4"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016880 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.078623 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.082534 4867 scope.go:117] "RemoveContainer" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.104265 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.122963 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123893 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata"
Feb 14 04:34:08 crc
kubenswrapper[4867]: I0214 04:34:08.123921 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata" Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123953 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.123962 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log" Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123994 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124006 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns" Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.124031 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="init" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124039 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="init" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124324 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124351 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124381 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns" Feb 14 04:34:08 crc kubenswrapper[4867]: 
I0214 04:34:08.126196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.129576 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.134359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.141215 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.205863 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.209072 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.209185 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"} err="failed to get container status \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": rpc error: code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.209216 4867 scope.go:117] "RemoveContainer" 
containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.212024 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212053 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"} err="failed to get container status \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212121 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212353 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"} err="failed to get container status \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": rpc error: code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212389 4867 scope.go:117] 
"RemoveContainer" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.213225 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"} err="failed to get container status \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.236605 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.236978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.237542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.238006 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.238437 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.342158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.342967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.343222 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.344352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.344859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.345407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.347176 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.348558 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.350464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.364106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.450961 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.622893 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753679 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.754116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: 
\"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.763919 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts" (OuterVolumeSpecName: "scripts") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.764084 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7" (OuterVolumeSpecName: "kube-api-access-jzzc7") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "kube-api-access-jzzc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.799674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.829770 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data" (OuterVolumeSpecName: "config-data") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858353 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858439 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858464 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858479 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.983631 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:08 crc kubenswrapper[4867]: W0214 04:34:08.990523 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac83a182_1841_4e64_9b31_f20e32917613.slice/crio-17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b WatchSource:0}: Error finding container 17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b: Status 404 returned error can't find the container with id 17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.010099 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" 
path="/var/lib/kubelet/pods/7959a0fa-00bd-492c-9892-a8c8727549c6/volumes" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.010786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" path="/var/lib/kubelet/pods/f7eae771-49da-40b9-a538-9c7c49f61ce3/volumes" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.026590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerDied","Data":"b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031082 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031158 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.033176 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerID="9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989" exitCode=0 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.033220 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerDied","Data":"9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.281447 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.281794 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log" containerID="cri-o://da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" gracePeriod=30 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.282461 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api" containerID="cri-o://9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" gracePeriod=30 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.314212 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.314873 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler" containerID="cri-o://bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" gracePeriod=30 Feb 14 
04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.330078 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.955968 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.998927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.001073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs" (OuterVolumeSpecName: "logs") pod 
"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.009257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl" (OuterVolumeSpecName: "kube-api-access-d7vsl") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "kube-api-access-d7vsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.042238 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046556 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" containerID="cri-o://e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" gracePeriod=30 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046678 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" containerID="cri-o://c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" gracePeriod=30 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049182 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049325 4867 generic.go:334] "Generic (PLEG): container finished" podID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" exitCode=0
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049352 4867 generic.go:334] "Generic (PLEG): container finished" podID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" exitCode=143
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"}
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"}
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500"}
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049474 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.053171 4867 generic.go:334] "Generic (PLEG): container finished" podID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerID="027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200" exitCode=0
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.053352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerDied","Data":"027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200"}
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.054678 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data" (OuterVolumeSpecName: "config-data") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.094626 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.101995 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102023 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102033 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102046 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.108856 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.108836324 podStartE2EDuration="2.108836324s" podCreationTimestamp="2026-02-14 04:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:10.062901969 +0000 UTC m=+1482.143839303" watchObservedRunningTime="2026-02-14 04:34:10.108836324 +0000 UTC m=+1482.189773638"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.128765 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.133523 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.133583 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"} err="failed to get container status \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.133613 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.138331 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.138368 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"} err="failed to get container status \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.138387 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.139816 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"} err="failed to get container status \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.139869 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.140233 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"} err="failed to get container status \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.162065 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.510552 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.542089 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.560153 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.597833 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598470 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598493 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log"
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598557 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598568 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage"
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598581 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598588 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync"
Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598600 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598608 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598814 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598833 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598847 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598866 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.605695 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.605843 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.609406 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610264 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") "
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") "
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610684 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") "
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610846 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") "
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.616683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg" (OuterVolumeSpecName: "kube-api-access-x5whg") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "kube-api-access-x5whg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.660762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts" (OuterVolumeSpecName: "scripts") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.664689 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.667701 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.674457 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data" (OuterVolumeSpecName: "config-data") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.712821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.712964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713014 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713157 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713256 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713276 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713289 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713302 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815757 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.816030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.823302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.838373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.840265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.077580 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.149609 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" path="/var/lib/kubelet/pods/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6/volumes"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192362 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerDied","Data":"2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed"}
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192433 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192564 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198184 4867 generic.go:334] "Generic (PLEG): container finished" podID="ac83a182-1841-4e64-9b31-f20e32917613" containerID="c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" exitCode=0
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198221 4867 generic.go:334] "Generic (PLEG): container finished" podID="ac83a182-1841-4e64-9b31-f20e32917613" containerID="e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" exitCode=143
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1"}
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40"}
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.284584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.286738 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.306911 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.373917 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.446581 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.446950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.447038 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569178 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.578734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.595178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.604116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.738739 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.744804 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.868625 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28"
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") "
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") "
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900724 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") "
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") "
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") "
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.901977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs" (OuterVolumeSpecName: "logs") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.918781 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth" (OuterVolumeSpecName: "kube-api-access-5qcth") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "kube-api-access-5qcth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.958009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.978061 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data" (OuterVolumeSpecName: "config-data") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003439 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003987 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003999 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.004008 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.004017 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.010685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt" (OuterVolumeSpecName: "kube-api-access-q2gkt") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "kube-api-access-q2gkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.012024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.012787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts" (OuterVolumeSpecName: "scripts") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.043553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data" (OuterVolumeSpecName: "config-data") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.054441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.060789 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106885 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106920 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106936 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106948 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106960 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.163903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208795 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208879 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") "
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID:
\"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.213188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.215884 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.215918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66" (OuterVolumeSpecName: "kube-api-access-xmp66") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "kube-api-access-xmp66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.216201 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts" (OuterVolumeSpecName: "scripts") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225687 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" exitCode=137 Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"2a9b10b567b5808562253fe944271d1f75330bc923dcd36a8e5d5a2e2e2a94fb"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225839 4867 scope.go:117] "RemoveContainer" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.226057 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.235303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.235374 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.239706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"8eef4a09d30f75b09b2a4e941b5145891d9e9ba139549a7546f8625ba9359aed"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerDied","Data":"1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249480 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249583 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.265255 4867 scope.go:117] "RemoveContainer" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.285296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.296001 4867 scope.go:117] "RemoveContainer" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.308725 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323212 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323245 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323254 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323263 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323272 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.333496 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.335072 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.345792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346262 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346290 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346314 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346320 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346331 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346337 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346346 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346353 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346372 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346378 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346395 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346401 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346614 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346624 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346640 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346646 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346661 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346673 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346686 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.347897 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352458 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352544 4867 scope.go:117] "RemoveContainer" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.390087 4867 scope.go:117] "RemoveContainer" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.392813 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": container with ID starting with 24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167 not found: ID does not exist" 
containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.392865 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"} err="failed to get container status \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": rpc error: code = NotFound desc = could not find container \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": container with ID starting with 24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.392899 4867 scope.go:117] "RemoveContainer" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.394130 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": container with ID starting with 36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7 not found: ID does not exist" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.394161 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"} err="failed to get container status \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": rpc error: code = NotFound desc = could not find container \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": container with ID starting with 36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.394183 4867 scope.go:117] 
"RemoveContainer" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.395689 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": container with ID starting with cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b not found: ID does not exist" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.395730 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"} err="failed to get container status \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": rpc error: code = NotFound desc = could not find container \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": container with ID starting with cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.395755 4867 scope.go:117] "RemoveContainer" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.409275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.416429 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": container with ID starting with 384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6 not found: ID does not exist" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 
04:34:12.416477 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"} err="failed to get container status \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": rpc error: code = NotFound desc = could not find container \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": container with ID starting with 384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.416532 4867 scope.go:117] "RemoveContainer" containerID="c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.428932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.430043 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.438442 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.453312 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data" (OuterVolumeSpecName: "config-data") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.469996 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541605 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " 
pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541827 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.543451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.547495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.547758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.548379 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.560096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"nova-metadata-0\" (UID: 
\"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.654320 4867 scope.go:117] "RemoveContainer" containerID="e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.667750 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.710590 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.738302 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.770044 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.773074 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.778312 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.788898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.792714 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.867843 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868148 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868391 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: 
\"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972474 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972519 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc 
kubenswrapper[4867]: I0214 04:34:12.972643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.976268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.976420 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.981918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.994193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.012274 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" path="/var/lib/kubelet/pods/146fecda-f9b9-4c60-96a7-feb4120cda4c/volumes" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.014008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac83a182-1841-4e64-9b31-f20e32917613" path="/var/lib/kubelet/pods/ac83a182-1841-4e64-9b31-f20e32917613/volumes" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.187601 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:13 crc kubenswrapper[4867]: W0214 04:34:13.259161 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35a6b709_4f80_4abc_a92f_24a43d09a805.slice/crio-e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74 WatchSource:0}: Error finding container e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74: Status 404 returned error can't find the container with id e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74 Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.261071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.314631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e367f188-2aa4-4374-a768-92b8e463e40d","Type":"ContainerStarted","Data":"23e053e61533d60d688ce5e0075d32a24d5d784bcd101b0ac198cf0073c4215e"} Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.314698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e367f188-2aa4-4374-a768-92b8e463e40d","Type":"ContainerStarted","Data":"e09a2563af19b7a141e2095e4a362e700d96702314303525282068762494921d"} Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.317642 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.329381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"} Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.329438 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"} Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.368202 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.368179358 podStartE2EDuration="2.368179358s" podCreationTimestamp="2026-02-14 04:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:13.337250357 +0000 UTC m=+1485.418187681" watchObservedRunningTime="2026-02-14 04:34:13.368179358 +0000 UTC m=+1485.449116672" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.423499 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.423462444 podStartE2EDuration="3.423462444s" podCreationTimestamp="2026-02-14 04:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:13.389728267 +0000 UTC m=+1485.470665581" watchObservedRunningTime="2026-02-14 04:34:13.423462444 +0000 UTC m=+1485.504399758" Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.737347 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:13 crc kubenswrapper[4867]: W0214 04:34:13.740692 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddca43c59_5d18_4f9d_bb72_49460d8d691f.slice/crio-d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70 WatchSource:0}: Error finding container d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70: Status 404 returned error can't find the container with id d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70 Feb 14 04:34:14 crc kubenswrapper[4867]: 
I0214 04:34:14.342702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70"} Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe"} Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233"} Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74"} Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.380578 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.38055693 podStartE2EDuration="2.38055693s" podCreationTimestamp="2026-02-14 04:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:14.363142762 +0000 UTC m=+1486.444080066" watchObservedRunningTime="2026-02-14 04:34:14.38055693 +0000 UTC m=+1486.461494244" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.568615 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.572551 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.578024 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.578358 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.583478 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.599609 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.619829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.722980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723086 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.729241 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.730730 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.741734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.743165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0" Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.907072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.372755 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"} Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.373156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"} Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.535123 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.420395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"} Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.426692 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90"} Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.426751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"873489133de3c353c9f8ca313cc4a323ae602d5913923a1f3148b8aae71c2510"} Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.463626 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"} Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.464180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.498133 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.195903818 podStartE2EDuration="5.498110464s" podCreationTimestamp="2026-02-14 04:34:12 +0000 UTC" firstStartedPulling="2026-02-14 04:34:13.744038187 +0000 UTC m=+1485.824975501" lastFinishedPulling="2026-02-14 04:34:17.046244833 +0000 UTC m=+1489.127182147" observedRunningTime="2026-02-14 04:34:17.489906324 +0000 UTC m=+1489.570843638" watchObservedRunningTime="2026-02-14 04:34:17.498110464 +0000 UTC m=+1489.579047778" Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.669523 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:34:17 crc 
kubenswrapper[4867]: I0214 04:34:17.669586 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:34:18 crc kubenswrapper[4867]: I0214 04:34:18.390191 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:18 crc kubenswrapper[4867]: I0214 04:34:18.908844 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.501777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb"} Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.501966 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" containerID="cri-o://a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" gracePeriod=30 Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502024 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" containerID="cri-o://0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" gracePeriod=30 Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502125 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" containerID="cri-o://14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" gracePeriod=30 Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502149 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" 
containerName="ceilometer-notification-agent" containerID="cri-o://dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" gracePeriod=30 Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.536683 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.539370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.553142 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665260 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665526 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768133 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768689 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.789077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz947\" 
(UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.869748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.523685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527282 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" exitCode=0 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527310 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" exitCode=2 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527320 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" exitCode=0 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.688598 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:34:20 crc kubenswrapper[4867]: W0214 04:34:20.701975 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a0a67f_28d7_4aa6_872b_a0223c46a9ce.slice/crio-fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896 WatchSource:0}: Error finding container fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896: Status 404 returned error can't find the container with id fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896 Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.079412 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.079892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564234 4867 generic.go:334] "Generic (PLEG): container finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd" exitCode=0 Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" 
event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd"} Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896"} Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.808788 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.162728 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.162856 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.669216 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.669283 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.587386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" 
event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d"} Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2"} Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591211 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" containerID="cri-o://389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591305 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" containerID="cri-o://676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591357 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" containerID="cri-o://f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591395 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" containerID="cri-o://9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.658371 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.236014663 
podStartE2EDuration="9.658346533s" podCreationTimestamp="2026-02-14 04:34:14 +0000 UTC" firstStartedPulling="2026-02-14 04:34:15.539142969 +0000 UTC m=+1487.620080283" lastFinishedPulling="2026-02-14 04:34:22.961474839 +0000 UTC m=+1495.042412153" observedRunningTime="2026-02-14 04:34:23.642038715 +0000 UTC m=+1495.722976029" watchObservedRunningTime="2026-02-14 04:34:23.658346533 +0000 UTC m=+1495.739283837" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.691784 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.691996 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.176363 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.311964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312428 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312453 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.314343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.314443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.320641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq" (OuterVolumeSpecName: "kube-api-access-hdfnq") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "kube-api-access-hdfnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.320812 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts" (OuterVolumeSpecName: "scripts") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415343 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415376 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415386 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415395 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.417705 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.445925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.503615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data" (OuterVolumeSpecName: "config-data") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518235 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518272 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518282 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602333 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" exitCode=0 Feb 14 
04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602448 4867 scope.go:117] "RemoveContainer" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602609 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617721 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" exitCode=0 Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617770 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" exitCode=0 Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.652467 4867 scope.go:117] "RemoveContainer" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.652912 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.673327 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.693592 4867 scope.go:117] "RemoveContainer" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.733036 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734831 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734857 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734872 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734878 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734903 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 
04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734909 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734945 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734951 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735192 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735211 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735222 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735232 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.737547 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.742172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.742599 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.749703 4867 scope.go:117] "RemoveContainer" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.749881 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.820353 4867 scope.go:117] "RemoveContainer" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.821183 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": container with ID starting with 0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35 not found: ID does not exist" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821268 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"} err="failed to get container status \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": rpc error: code = NotFound desc = could not find container \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": container with ID starting with 0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35 not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 
04:34:24.821300 4867 scope.go:117] "RemoveContainer" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.821603 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": container with ID starting with 14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc not found: ID does not exist" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821632 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"} err="failed to get container status \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": rpc error: code = NotFound desc = could not find container \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": container with ID starting with 14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821653 4867 scope.go:117] "RemoveContainer" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.822241 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": container with ID starting with dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe not found: ID does not exist" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.822286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"} err="failed to get container status \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": rpc error: code = NotFound desc = could not find container \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": container with ID starting with dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.822317 4867 scope.go:117] "RemoveContainer" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.824495 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": container with ID starting with a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14 not found: ID does not exist" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.824541 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"} err="failed to get container status \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": rpc error: code = NotFound desc = could not find container \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": container with ID starting with a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14 not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: 
\"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826528 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826586 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826679 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929543 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929575 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msskx\" (UniqueName: 
\"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.930285 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.930299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.934908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.935116 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.935192 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.936381 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.948753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.009687 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" path="/var/lib/kubelet/pods/dca43c59-5d18-4f9d-bb72-49460d8d691f/volumes" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.067288 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.639150 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" exitCode=0 Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.639474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171"} Feb 14 04:34:25 crc kubenswrapper[4867]: W0214 04:34:25.661309 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91a07e13_20f0_41a3_b974_4570ebfdc497.slice/crio-f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332 WatchSource:0}: Error finding container f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332: Status 404 returned error can't find the container with id f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332 Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.677825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:26 crc kubenswrapper[4867]: I0214 04:34:26.654877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6"} Feb 14 04:34:26 crc kubenswrapper[4867]: I0214 04:34:26.655654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332"} Feb 14 04:34:27 crc kubenswrapper[4867]: I0214 04:34:27.670272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d"} Feb 14 04:34:28 crc kubenswrapper[4867]: I0214 04:34:28.683895 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.712168 4867 generic.go:334] "Generic (PLEG): container finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d" exitCode=0 Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.712223 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.719496 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.720137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.781227 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6832021470000003 podStartE2EDuration="6.781183945s" podCreationTimestamp="2026-02-14 04:34:24 +0000 UTC" firstStartedPulling="2026-02-14 04:34:25.664774394 +0000 UTC m=+1497.745711708" lastFinishedPulling="2026-02-14 04:34:29.762756192 +0000 UTC m=+1501.843693506" 
observedRunningTime="2026-02-14 04:34:30.769312116 +0000 UTC m=+1502.850249430" watchObservedRunningTime="2026-02-14 04:34:30.781183945 +0000 UTC m=+1502.862121259" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.083963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.084547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.087584 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.088816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.731295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341"} Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.732427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.734988 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.757747 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8w8t2" podStartSLOduration=3.48626663 podStartE2EDuration="12.757714154s" podCreationTimestamp="2026-02-14 04:34:19 +0000 UTC" firstStartedPulling="2026-02-14 04:34:21.85021712 +0000 UTC m=+1493.931154434" lastFinishedPulling="2026-02-14 04:34:31.121664644 +0000 UTC m=+1503.202601958" observedRunningTime="2026-02-14 04:34:31.757673023 +0000 UTC 
m=+1503.838610347" watchObservedRunningTime="2026-02-14 04:34:31.757714154 +0000 UTC m=+1503.838651468" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.983794 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.989133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.031601 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146651 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248873 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnqpt\" 
(UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.249057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.273451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.336407 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.716921 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.727937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.734024 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.820878 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:34:33 crc kubenswrapper[4867]: I0214 04:34:33.362124 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:33 crc kubenswrapper[4867]: I0214 04:34:33.771663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerStarted","Data":"3c342daaec09db1c73482280fce80173920eec884b7d07687fab104355216038"} Feb 14 04:34:34 crc kubenswrapper[4867]: I0214 04:34:34.782833 4867 generic.go:334] "Generic (PLEG): container finished" podID="5971b677-9b43-4667-b205-3926975d03d8" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" exitCode=0 Feb 14 04:34:34 crc kubenswrapper[4867]: I0214 04:34:34.784659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.027579 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 
04:34:35.028643 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" containerID="cri-o://3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" gracePeriod=30 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.028894 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" containerID="cri-o://6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" gracePeriod=30 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.804737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerStarted","Data":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.804912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.808073 4867 generic.go:334] "Generic (PLEG): container finished" podID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" exitCode=143 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.808109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.832362 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" podStartSLOduration=4.832335434 podStartE2EDuration="4.832335434s" podCreationTimestamp="2026-02-14 04:34:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:35.824773451 +0000 UTC m=+1507.905710775" watchObservedRunningTime="2026-02-14 04:34:35.832335434 +0000 UTC m=+1507.913272748" Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.705166 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708096 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" containerID="cri-o://04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708372 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" containerID="cri-o://8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708438 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" containerID="cri-o://365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708523 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" containerID="cri-o://d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" gracePeriod=30 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.526864 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.690101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd" (OuterVolumeSpecName: "kube-api-access-dnxbd") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "kube-api-access-dnxbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.703353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.721696 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data" (OuterVolumeSpecName: "config-data") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751481 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751549 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751563 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834289 4867 generic.go:334] "Generic (PLEG): container finished" podID="871276b6-7245-427a-8b55-29dfdfe3695b" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" exitCode=137 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerDied","Data":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerDied","Data":"7421ae1cc8f7150f6013e7337e1040d9ce9252e306ea9b4407c26605f30d6363"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834408 4867 scope.go:117] "RemoveContainer" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834575 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.847992 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848034 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" exitCode=2 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848044 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848053 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848081 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848146 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.938592 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.944001 4867 scope.go:117] "RemoveContainer" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: E0214 04:34:37.946866 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": container with ID starting with 7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9 not found: ID does not exist" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.946921 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"} err="failed to get container status \"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": rpc error: code = NotFound desc = could not find container 
\"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": container with ID starting with 7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9 not found: ID does not exist" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.955272 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.975263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: E0214 04:34:37.976522 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.976553 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.976795 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.977827 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.979757 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.980783 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.980937 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.991041 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.059726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060751 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 
04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.061151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.089710 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.164654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.164793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165054 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 
04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165179 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165387 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165998 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165743 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166422 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166435 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.171125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.173123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.175903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.180071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.190749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts" (OuterVolumeSpecName: "scripts") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.191213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx" (OuterVolumeSpecName: "kube-api-access-msskx") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "kube-api-access-msskx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.191422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.227674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269326 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269365 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269378 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.341959 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" 
(UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.372821 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.402328 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.438848 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data" (OuterVolumeSpecName: "config-data") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.475945 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: E0214 04:34:38.483540 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf365fedd_2e1e_41da_aeed_c2f6cf9de0eb.slice/crio-3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.761813 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.822125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.825933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.828379 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.828551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.831477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs" (OuterVolumeSpecName: "logs") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.849584 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8" (OuterVolumeSpecName: "kube-api-access-bnlv8") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "kube-api-access-bnlv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.868013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.878903 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data" (OuterVolumeSpecName: "config-data") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898236 4867 scope.go:117] "RemoveContainer" containerID="8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898277 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932385 4867 generic.go:334] "Generic (PLEG): container finished" podID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" exitCode=0 Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"8eef4a09d30f75b09b2a4e941b5145891d9e9ba139549a7546f8625ba9359aed"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932760 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.982008 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989360 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989394 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989409 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.091292 4867 scope.go:117] "RemoveContainer" containerID="d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.119534 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" path="/var/lib/kubelet/pods/871276b6-7245-427a-8b55-29dfdfe3695b/volumes" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.120917 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.137712 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.160820 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 
crc kubenswrapper[4867]: E0214 04:34:39.161609 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163884 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163892 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163912 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163919 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163937 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163945 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163953 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163959 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 
14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163990 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164295 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164316 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164328 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164341 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164371 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.168317 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.177048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.177280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.192825 4867 scope.go:117] "RemoveContainer" containerID="365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.198469 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.217793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.281686 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.305207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.305284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307282 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307824 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307933 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.325278 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc 
kubenswrapper[4867]: I0214 04:34:39.337358 4867 scope.go:117] "RemoveContainer" containerID="04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.349049 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.349181 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.357465 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.357813 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.369277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.378180 4867 scope.go:117] "RemoveContainer" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.413743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.413945 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.418909 4867 scope.go:117] "RemoveContainer" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.419767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.423776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.437265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.438058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.439980 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.447089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.461186 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518245 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518453 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.542901 4867 scope.go:117] "RemoveContainer" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.546315 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": container with ID starting with 3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9 not found: ID does not exist" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.546367 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"} err="failed to get container status \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": rpc error: code = NotFound desc = could not find container \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": container with ID starting with 3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9 not found: ID does not exist" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.546399 4867 scope.go:117] "RemoveContainer" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc 
kubenswrapper[4867]: E0214 04:34:39.548789 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": container with ID starting with 6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84 not found: ID does not exist" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.550208 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"} err="failed to get container status \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": rpc error: code = NotFound desc = could not find container \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": container with ID starting with 6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84 not found: ID does not exist" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.622256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.626317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.629621 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.631116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.631366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.641398 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.648044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.674851 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.872416 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.873361 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.016988 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e1bf5e4-7b04-4a47-aa41-e547815fc623","Type":"ContainerStarted","Data":"27618aec079281309f2f806dff0227f6ec2dda3b07305db0870bc2570992a846"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.017601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e1bf5e4-7b04-4a47-aa41-e547815fc623","Type":"ContainerStarted","Data":"7a265dffe4bee445426f6de577d81903f5f8f44fc744a7f1a6c93811b1574fb2"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.021489 4867 generic.go:334] "Generic (PLEG): container finished" podID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerID="bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" exitCode=137 Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.022235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerDied","Data":"bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.055285 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.055262149 podStartE2EDuration="3.055262149s" podCreationTimestamp="2026-02-14 04:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-14 04:34:40.051317353 +0000 UTC m=+1512.132254667" watchObservedRunningTime="2026-02-14 04:34:40.055262149 +0000 UTC m=+1512.136199463" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.303473 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.327788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.361328 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.394432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data" (OuterVolumeSpecName: "config-data") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.437279 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.437611 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.438589 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.445723 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467" (OuterVolumeSpecName: "kube-api-access-bv467") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "kube-api-access-bv467". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.491206 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.538485 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.544048 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.544091 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.657708 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.019708 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=< Feb 14 04:34:41 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:34:41 crc kubenswrapper[4867]: > Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.020761 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" path="/var/lib/kubelet/pods/91a07e13-20f0-41a3-b974-4570ebfdc497/volumes" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.023243 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" path="/var/lib/kubelet/pods/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb/volumes" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063896 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"23eda3f5de37b914af1120c4a29676bc10a45dd14a87ddd0f0c35695c9bbb5a7"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerDied","Data":"2f1ec16c434c7fe8c8b2e012785b630337a932a6d095d2d76aaa4e23a79c54fa"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077762 4867 scope.go:117] "RemoveContainer" containerID="bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077310 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.082336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.160688 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.160653868 podStartE2EDuration="2.160653868s" podCreationTimestamp="2026-02-14 04:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:41.09112711 +0000 UTC m=+1513.172064424" watchObservedRunningTime="2026-02-14 04:34:41.160653868 +0000 UTC m=+1513.241591192" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.223020 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.240205 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.291580 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: E0214 04:34:41.292450 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.292481 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.292812 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler" Feb 14 
04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.293902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.301477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.323708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.583762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " 
pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.583974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.584061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.590576 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.596810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.609984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.616334 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.698796 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2bbf3a42-f012-4bed-a60e-1defcd0b1af9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2bbf3a42-f012-4bed-a60e-1defcd0b1af9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2bbf3a42_f012_4bed_a60e_1defcd0b1af9.slice" Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.115224 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.120704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6"} Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.339428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.418092 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"] Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.418361 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns" containerID="cri-o://34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" gracePeriod=10 Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.016346 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" path="/var/lib/kubelet/pods/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf/volumes" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.127331 4867 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.145902 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146006 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146111 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146135 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146198 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.159376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d"} Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.162347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1"} Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.164986 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerStarted","Data":"c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946"} Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.165050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerStarted","Data":"4ee4cff4cc87308f769e3bd724d5abd95ae658a9785bc66a6f75cd2304c98ea1"} Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175372 4867 generic.go:334] "Generic (PLEG): container finished" podID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" exitCode=0 Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175419 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"} Feb 14 04:34:43 
crc kubenswrapper[4867]: I0214 04:34:43.175450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"d75507374634724c8a1ef310952a5ce339f06c748d3d87d74bf982c68a7ee156"} Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175468 4867 scope.go:117] "RemoveContainer" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175604 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.199984 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v" (OuterVolumeSpecName: "kube-api-access-qpf7v") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "kube-api-access-qpf7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.245081 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.245061005 podStartE2EDuration="2.245061005s" podCreationTimestamp="2026-02-14 04:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:43.232524618 +0000 UTC m=+1515.313461942" watchObservedRunningTime="2026-02-14 04:34:43.245061005 +0000 UTC m=+1515.325998319" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.263663 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.291119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.323939 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.324498 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config" (OuterVolumeSpecName: "config") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.324792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.349954 4867 scope.go:117] "RemoveContainer" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.367473 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368016 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368228 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368335 4867 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.386711 4867 scope.go:117] "RemoveContainer" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" Feb 14 04:34:43 crc kubenswrapper[4867]: E0214 04:34:43.387443 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": container with ID starting with 34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f not found: ID does not exist" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.387518 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"} err="failed to get container status \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": rpc error: code = NotFound desc = could not find container \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": container with ID starting with 34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f not found: ID does not exist" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.387558 4867 scope.go:117] "RemoveContainer" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374" Feb 14 04:34:43 crc kubenswrapper[4867]: E0214 04:34:43.388175 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": container with ID starting with 25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374 not found: ID does not exist" 
containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.388224 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"} err="failed to get container status \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": rpc error: code = NotFound desc = could not find container \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": container with ID starting with 25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374 not found: ID does not exist" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.403549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.403669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.471734 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.526317 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"] Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.589220 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"] Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.013918 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" path="/var/lib/kubelet/pods/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9/volumes" Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208183 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0"} Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208379 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent" containerID="cri-o://2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.209336 4867 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd" containerID="cri-o://8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.209413 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core" containerID="cri-o://645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.209459 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent" containerID="cri-o://da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.238297 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837405242 podStartE2EDuration="6.238277369s" podCreationTimestamp="2026-02-14 04:34:39 +0000 UTC" firstStartedPulling="2026-02-14 04:34:40.533601901 +0000 UTC m=+1512.614539215" lastFinishedPulling="2026-02-14 04:34:43.934474028 +0000 UTC m=+1516.015411342" observedRunningTime="2026-02-14 04:34:45.234684893 +0000 UTC m=+1517.315622207" watchObservedRunningTime="2026-02-14 04:34:45.238277369 +0000 UTC m=+1517.319214683" Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.228690 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0" exitCode=0 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229258 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" 
containerID="645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d" exitCode=2 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229273 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1" exitCode=0 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.228790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229333 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.616848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.287037 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6" exitCode=0 Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.287113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6"} Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 
04:34:48.288796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2"} Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.288899 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.301358 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.403804 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.425604 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429319 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429429 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: 
\"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429733 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.430097 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.430364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.431305 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.431333 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.437794 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc" (OuterVolumeSpecName: "kube-api-access-77lbc") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "kube-api-access-77lbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.439956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts" (OuterVolumeSpecName: "scripts") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.483934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534187 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534412 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534473 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.535621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.558741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data" (OuterVolumeSpecName: "config-data") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.637104 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.637137 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.299454 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.353889 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.381320 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.395726 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.411709 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412546 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="init" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412588 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="init" Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412611 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412618 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd" Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns" Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" 
containerName="ceilometer-central-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412708 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412719 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412725 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core" Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412743 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412751 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412986 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412998 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413031 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413045 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413066 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.415290 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.420651 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.423643 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.424338 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.561340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.563106 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564483 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564584 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.565283 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.565336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.566993 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell1-manage-config-data" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.573003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.667900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.667977 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") 
pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669178 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc 
kubenswrapper[4867]: I0214 04:34:49.669249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.670058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.674239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.674580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675819 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.678695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.692728 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.754126 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771765 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.775754 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 
04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.775940 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.778189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.793692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.889169 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.277161 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.344693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"36752e6e5f2c31ee736f7a9a28d860706f6c2685f55f602f485609bff4a72cd3"} Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.521052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:34:50 crc kubenswrapper[4867]: W0214 04:34:50.536614 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be79f3c_fa78_40d2_9ad9_d1dfd965c831.slice/crio-93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb WatchSource:0}: Error finding container 93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb: Status 404 returned error can't find the container with id 93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.746745 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.747026 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 
04:34:50.944430 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=< Feb 14 04:34:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:34:50 crc kubenswrapper[4867]: > Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.022347 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" path="/var/lib/kubelet/pods/ce113f40-e807-4f30-adaf-8053c4ac7b65/volumes" Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.361192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.365519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerStarted","Data":"8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.365580 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerStarted","Data":"93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.391026 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k2ls7" podStartSLOduration=2.391005575 podStartE2EDuration="2.391005575s" podCreationTimestamp="2026-02-14 04:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
04:34:51.389011722 +0000 UTC m=+1523.469949036" watchObservedRunningTime="2026-02-14 04:34:51.391005575 +0000 UTC m=+1523.471942889" Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.617029 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.661705 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 04:34:52 crc kubenswrapper[4867]: I0214 04:34:52.380523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6"} Feb 14 04:34:52 crc kubenswrapper[4867]: I0214 04:34:52.432368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 04:34:53 crc kubenswrapper[4867]: I0214 04:34:53.396127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.424754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.425690 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.432722 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" exitCode=137 Feb 14 04:34:54 crc 
kubenswrapper[4867]: I0214 04:34:54.432769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.466289 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.829625243 podStartE2EDuration="5.466267925s" podCreationTimestamp="2026-02-14 04:34:49 +0000 UTC" firstStartedPulling="2026-02-14 04:34:50.27647095 +0000 UTC m=+1522.357408264" lastFinishedPulling="2026-02-14 04:34:53.913113622 +0000 UTC m=+1525.994050946" observedRunningTime="2026-02-14 04:34:54.45269113 +0000 UTC m=+1526.533628444" watchObservedRunningTime="2026-02-14 04:34:54.466267925 +0000 UTC m=+1526.547205239" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.603624 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.658707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.658975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.659016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: 
\"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.659283 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.712435 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts" (OuterVolumeSpecName: "scripts") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.716254 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq" (OuterVolumeSpecName: "kube-api-access-gv5cq") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "kube-api-access-gv5cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.772807 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.772850 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.919433 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data" (OuterVolumeSpecName: "config-data") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.951251 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.980280 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.980703 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.449760 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.450725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"873489133de3c353c9f8ca313cc4a323ae602d5913923a1f3148b8aae71c2510"} Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.451434 4867 scope.go:117] "RemoveContainer" containerID="676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.510694 4867 scope.go:117] "RemoveContainer" containerID="f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.557597 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.596676 4867 scope.go:117] "RemoveContainer" containerID="9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.624679 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.640895 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641725 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641751 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641830 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641838 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641862 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641868 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642116 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642147 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 
04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642165 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642181 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.644974 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648681 4867 scope.go:117] "RemoveContainer" containerID="389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648950 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648965 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652010 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652244 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652536 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.657737 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod 
\"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712890 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816390 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc 
kubenswrapper[4867]: I0214 04:34:55.822064 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.822941 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.832561 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.833604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.834965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.838540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " 
pod="openstack/aodh-0" Feb 14 04:34:56 crc kubenswrapper[4867]: I0214 04:34:56.033348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:56 crc kubenswrapper[4867]: I0214 04:34:56.536743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.011134 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" path="/var/lib/kubelet/pods/3b8b8297-e7e9-4d4e-9fbf-8aa302601521/volumes" Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.478145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45"} Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.478699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"cc6bfc1f8b14bfadc90bd97fe9104d42e32da1b206a8c9f9b7d46cb64815cc9b"} Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.494695 4867 generic.go:334] "Generic (PLEG): container finished" podID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerID="8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15" exitCode=0 Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.494790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerDied","Data":"8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15"} Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.500277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be"} Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.513434 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef"} Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.689830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.690818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.693937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.733317 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.082078 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164497 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164613 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.171914 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h" (OuterVolumeSpecName: "kube-api-access-d7j5h") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "kube-api-access-d7j5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.173896 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts" (OuterVolumeSpecName: "scripts") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.221589 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data" (OuterVolumeSpecName: "config-data") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.237574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268759 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268801 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268812 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268824 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527123 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527131 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerDied","Data":"93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb"} Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527215 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.529867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95"} Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.530380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.540435 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.664595 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.507778315 podStartE2EDuration="5.664565614s" podCreationTimestamp="2026-02-14 04:34:55 +0000 UTC" firstStartedPulling="2026-02-14 04:34:56.545909792 +0000 UTC m=+1528.626847106" lastFinishedPulling="2026-02-14 04:34:59.702697091 +0000 UTC m=+1531.783634405" observedRunningTime="2026-02-14 04:35:00.555306049 +0000 UTC m=+1532.636243363" watchObservedRunningTime="2026-02-14 04:35:00.664565614 +0000 UTC m=+1532.745502928" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.856734 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:00 crc 
kubenswrapper[4867]: I0214 04:35:00.902601 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.902950 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" containerID="cri-o://c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.914970 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.915234 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" containerID="cri-o://fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.915825 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" containerID="cri-o://4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.947342 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=< Feb 14 04:35:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:35:00 crc kubenswrapper[4867]: > Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.251223 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.251550 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.541806 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerID="fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" exitCode=143 Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.541877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233"} Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.627079 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.629302 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.635042 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.635128 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:02 crc kubenswrapper[4867]: I0214 04:35:02.551192 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" containerID="cri-o://49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" gracePeriod=30 Feb 14 04:35:02 crc kubenswrapper[4867]: I0214 04:35:02.551246 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" containerID="cri-o://8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" gracePeriod=30 Feb 14 04:35:03 crc kubenswrapper[4867]: I0214 04:35:03.566462 4867 generic.go:334] "Generic (PLEG): container finished" podID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerID="49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" exitCode=143 Feb 14 04:35:03 crc kubenswrapper[4867]: I0214 04:35:03.566979 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.316635 4867 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:42546->10.217.0.248:8775: read: connection reset by peer" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.316638 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:42538->10.217.0.248:8775: read: connection reset by peer" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.580997 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerID="4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" exitCode=0 Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.582308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.586826 4867 generic.go:334] "Generic (PLEG): container finished" podID="09251416-b49f-4e81-9584-8428f1903785" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" exitCode=0 Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.586860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerDied","Data":"c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.792713 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894025 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894456 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894965 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.905670 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd" (OuterVolumeSpecName: "kube-api-access-gwwnd") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "kube-api-access-gwwnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.969731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.970950 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data" (OuterVolumeSpecName: "config-data") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.974130 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.999447 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.000131 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.000605 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.115651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.115981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116013 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116317 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.117324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs" (OuterVolumeSpecName: "logs") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.120143 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.120463 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz" (OuterVolumeSpecName: "kube-api-access-szntz") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "kube-api-access-szntz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.155284 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data" (OuterVolumeSpecName: "config-data") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.197668 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.213212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223787 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223831 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223842 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223852 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74"} Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600672 4867 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600692 4867 scope.go:117] "RemoveContainer" containerID="4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.602833 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerDied","Data":"4ee4cff4cc87308f769e3bd724d5abd95ae658a9785bc66a6f75cd2304c98ea1"} Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.602941 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.630942 4867 scope.go:117] "RemoveContainer" containerID="fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.666828 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.692452 4867 scope.go:117] "RemoveContainer" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.698628 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.724687 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.743656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744156 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744173 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744189 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744197 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744215 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744221 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744233 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744241 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744483 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744530 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744539 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: 
I0214 04:35:05.744550 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.745441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.756110 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.756376 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.781976 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.784276 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.788791 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.789492 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.807348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.837498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.852911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " 
pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.853301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.853455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.867710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868099 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 
04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868681 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.976678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977258 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.978582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.978868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.983006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " 
pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.989757 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.993123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.993488 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.000867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.001657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.016546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.024485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.091171 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.167941 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.626947 4867 generic.go:334] "Generic (PLEG): container finished" podID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerID="8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" exitCode=0 Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.627020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4"} Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.795200 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:06 crc kubenswrapper[4867]: W0214 04:35:06.821900 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb228b6_c3a9_46ac_8c21_a8786c6ac11b.slice/crio-afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636 WatchSource:0}: Error finding container afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636: Status 404 returned error can't find the container with id afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636 Feb 14 04:35:06 crc kubenswrapper[4867]: W0214 04:35:06.824748 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3748198f_49fe_4a76_bd81_4ad518a594e8.slice/crio-78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4 WatchSource:0}: Error finding container 78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4: Status 404 returned error can't find the container with id 78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4 Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.836551 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.878436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913139 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.914120 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.914411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.915883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.918440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs" (OuterVolumeSpecName: "logs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.922578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42" (OuterVolumeSpecName: "kube-api-access-4vt42") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "kube-api-access-4vt42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019577 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019615 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019759 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09251416-b49f-4e81-9584-8428f1903785" path="/var/lib/kubelet/pods/09251416-b49f-4e81-9584-8428f1903785/volumes" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.020383 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" path="/var/lib/kubelet/pods/35a6b709-4f80-4abc-a92f-24a43d09a805/volumes" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.034442 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data" (OuterVolumeSpecName: "config-data") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.048716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.051242 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.083366 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122329 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122374 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122390 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122401 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.662422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"23eda3f5de37b914af1120c4a29676bc10a45dd14a87ddd0f0c35695c9bbb5a7"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.663059 4867 scope.go:117] "RemoveContainer" containerID="8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.663472 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.683972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"7ce542212205c747ed57f127d518600ad3fff73ae9a54575e1dc9fbb5b42feb8"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.684029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"020ee3e9d366c1b8fef2a939ab9172d0cb013d0129dc85d3831176ee65a1081f"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.684043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.690599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b","Type":"ContainerStarted","Data":"87a68aefd437700f9b6aa384418fc2aebbf7e5e7b1a2110cc403ad263a060445"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.690630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b","Type":"ContainerStarted","Data":"afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.726939 4867 scope.go:117] "RemoveContainer" containerID="49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.738046 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.738019469 podStartE2EDuration="2.738019469s" 
podCreationTimestamp="2026-02-14 04:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:07.713830999 +0000 UTC m=+1539.794768313" watchObservedRunningTime="2026-02-14 04:35:07.738019469 +0000 UTC m=+1539.818956783" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.756904 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.756880836 podStartE2EDuration="2.756880836s" podCreationTimestamp="2026-02-14 04:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:07.735719967 +0000 UTC m=+1539.816657281" watchObservedRunningTime="2026-02-14 04:35:07.756880836 +0000 UTC m=+1539.837818150" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.791214 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.827905 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.844964 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: E0214 04:35:07.845816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.845841 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: E0214 04:35:07.845895 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.845901 4867 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.846142 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.846185 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.854009 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858657 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858873 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.859003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955897 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.956107 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.956196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.989922 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.994371 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.030317 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.057978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058092 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058164 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.059632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc 
kubenswrapper[4867]: I0214 04:35:08.065446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.068257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.068802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.082462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.083708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod 
\"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.180888 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.181231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.319347 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.824378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.058290 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" path="/var/lib/kubelet/pods/850d3d1a-b2c1-4063-bfb3-a796d727ff88/volumes" Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.059095 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.731416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"950fc6945de9051af9e1b0faf98cebbbdb2928cf426dd534741b7b23b9d2cf6c"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.732359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"1077d52991be2b0d0e83d78d63c066a64dfc4b3b1a4bad89f608cda44ff26c27"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.732450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"ddf25d5b2fc2c44a19e57f1102b554dfe6a76562b72824cb420dd7acc799fa3f"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735371 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" exitCode=0 Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"70df0f314d5fb90d90314aa06788a811dc9c80acdc1aa6f7d2bb1ed596e5f7c2"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.738085 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.787971 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.787949438 podStartE2EDuration="2.787949438s" podCreationTimestamp="2026-02-14 04:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:09.769329628 +0000 UTC m=+1541.850266942" watchObservedRunningTime="2026-02-14 04:35:09.787949438 +0000 UTC m=+1541.868886752" Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.928153 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:09 crc kubenswrapper[4867]: 
I0214 04:35:09.980423 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:10 crc kubenswrapper[4867]: I0214 04:35:10.753945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.092580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168818 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podef0bc6d9-66ae-4a4d-8650-3c0ac27287cf] : Timed out while waiting for systemd to remove kubepods-besteffort-podef0bc6d9_66ae_4a4d_8650_3c0ac27287cf.slice" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168921 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.768431 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" exitCode=0 Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.768571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} Feb 14 04:35:12 crc 
kubenswrapper[4867]: I0214 04:35:12.348778 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.350019 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" containerID="cri-o://b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" gracePeriod=2 Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.889810 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.894826 4867 generic.go:334] "Generic (PLEG): container finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" exitCode=0 Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.895016 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341"} Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.915353 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvlgw" podStartSLOduration=3.473100387 podStartE2EDuration="5.915331957s" podCreationTimestamp="2026-02-14 04:35:07 +0000 UTC" firstStartedPulling="2026-02-14 04:35:09.737895753 +0000 UTC m=+1541.818833067" lastFinishedPulling="2026-02-14 04:35:12.180127323 +0000 UTC m=+1544.261064637" observedRunningTime="2026-02-14 04:35:12.914816533 +0000 UTC m=+1544.995753857" 
watchObservedRunningTime="2026-02-14 04:35:12.915331957 +0000 UTC m=+1544.996269271" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.110280 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.208786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.208840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.209030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.209542 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities" (OuterVolumeSpecName: "utilities") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.215529 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947" (OuterVolumeSpecName: "kube-api-access-kz947") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "kube-api-access-kz947". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.312718 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.312763 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.340133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.414891 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896"} Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918479 4867 scope.go:117] "RemoveContainer" containerID="b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918239 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.947776 4867 scope.go:117] "RemoveContainer" containerID="7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.968477 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.977703 4867 scope.go:117] "RemoveContainer" containerID="bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.984052 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:15 crc kubenswrapper[4867]: I0214 04:35:15.023594 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" path="/var/lib/kubelet/pods/07a0a67f-28d7-4aa6-872b-a0223c46a9ce/volumes" Feb 14 04:35:16 crc 
kubenswrapper[4867]: I0214 04:35:16.092799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.126651 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.168756 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.168810 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.993093 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 04:35:17 crc kubenswrapper[4867]: I0214 04:35:17.183732 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3748198f-49fe-4a76-bd81-4ad518a594e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:17 crc kubenswrapper[4867]: I0214 04:35:17.183752 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3748198f-49fe-4a76-bd81-4ad518a594e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.182294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.182358 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.319825 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.320173 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.377901 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.031745 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.098888 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.194842 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="464bbcc9-1810-40bc-8773-bfa3e615b67b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.195030 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="464bbcc9-1810-40bc-8773-bfa3e615b67b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.764489 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 04:35:20 crc kubenswrapper[4867]: I0214 04:35:20.999187 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvlgw" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" 
containerName="registry-server" containerID="cri-o://f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" gracePeriod=2 Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.578579 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716283 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.717193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities" (OuterVolumeSpecName: "utilities") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.717615 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.722265 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l" (OuterVolumeSpecName: "kube-api-access-k9n8l") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "kube-api-access-k9n8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.742462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.821391 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.821436 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012465 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" exitCode=0 Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012581 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"70df0f314d5fb90d90314aa06788a811dc9c80acdc1aa6f7d2bb1ed596e5f7c2"} Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012600 4867 scope.go:117] "RemoveContainer" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012769 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.046892 4867 scope.go:117] "RemoveContainer" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.050388 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.064695 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.089057 4867 scope.go:117] "RemoveContainer" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.156209 4867 scope.go:117] "RemoveContainer" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.157167 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": container with ID starting with f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6 not found: ID does not exist" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157222 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} err="failed to get container status \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": rpc error: code = NotFound desc = could not find container \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": container with ID starting with f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6 not found: 
ID does not exist" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157260 4867 scope.go:117] "RemoveContainer" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.157827 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": container with ID starting with 7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4 not found: ID does not exist" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157857 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} err="failed to get container status \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": rpc error: code = NotFound desc = could not find container \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": container with ID starting with 7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4 not found: ID does not exist" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157876 4867 scope.go:117] "RemoveContainer" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.158147 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": container with ID starting with 29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102 not found: ID does not exist" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.158178 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102"} err="failed to get container status \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": rpc error: code = NotFound desc = could not find container \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": container with ID starting with 29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102 not found: ID does not exist" Feb 14 04:35:23 crc kubenswrapper[4867]: I0214 04:35:23.027532 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" path="/var/lib/kubelet/pods/3dbe8df1-aae4-43fe-a7cc-bea6e0124213/volumes" Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.371016 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.372017 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" containerID="cri-o://c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" gracePeriod=30 Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.438890 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.439376 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" containerID="cri-o://46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" gracePeriod=30 Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.959294 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.048480 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056088 4867 generic.go:334] "Generic (PLEG): container finished" podID="a78fec22-f395-42fc-a228-8d896580bc95" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" exitCode=2 Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerDied","Data":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerDied","Data":"7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056226 4867 scope.go:117] "RemoveContainer" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056353 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062385 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e89a71e-e837-4d98-a707-27908a8342bc" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" exitCode=2 Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerDied","Data":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062465 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerDied","Data":"5b4f6da6858b80468a9ce475d2d3c8ccdc38ea567758289aef5a49879e4b28e8"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062552 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.100588 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"a78fec22-f395-42fc-a228-8d896580bc95\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.105362 4867 scope.go:117] "RemoveContainer" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.106007 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": container with ID starting with c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c not found: ID does not exist" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.106051 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} err="failed to get container status \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": rpc error: code = NotFound desc = could not find container \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": container with ID starting with c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c not found: ID does not exist" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.106075 4867 scope.go:117] "RemoveContainer" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.109716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq" (OuterVolumeSpecName: "kube-api-access-h5zbq") pod "a78fec22-f395-42fc-a228-8d896580bc95" (UID: "a78fec22-f395-42fc-a228-8d896580bc95"). InnerVolumeSpecName "kube-api-access-h5zbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.144974 4867 scope.go:117] "RemoveContainer" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.145519 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": container with ID starting with 46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6 not found: ID does not exist" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.145564 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} err="failed to get container status \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": rpc error: code = NotFound desc = could not find container \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": container with ID starting with 46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6 not found: ID does not exist" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.202875 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203185 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203967 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.206350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj" (OuterVolumeSpecName: "kube-api-access-9zlkj") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "kube-api-access-9zlkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.239963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.269536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data" (OuterVolumeSpecName: "config-data") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306070 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306259 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306341 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.401344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.416809 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.432098 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.447604 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.466260 4867 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467048 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467078 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467098 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467106 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467122 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467131 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467154 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467175 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467185 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467207 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467216 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467246 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467254 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467271 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467278 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467646 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467675 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467690 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467702 4867 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.468800 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.470988 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.471219 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.480721 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.483167 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.503074 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.503292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.513738 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.531436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615960 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.616020 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.616047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.616086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.724307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.731277 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.732229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.740083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.741797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.813930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.886017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.172420 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.173713 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.176391 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.401420 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.508386 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 04:35:26 crc kubenswrapper[4867]: W0214 04:35:26.510491 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9139dc7_b868_4f7c_9e7e_10e313ff1e10.slice/crio-81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec WatchSource:0}: Error finding container 81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec: Status 404 returned error can't find the container with id 81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.675557 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677313 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent" containerID="cri-o://cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5" gracePeriod=30
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677588 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent" containerID="cri-o://a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6" gracePeriod=30
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677576 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core" containerID="cri-o://1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab" gracePeriod=30
Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677605 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd" containerID="cri-o://d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950" gracePeriod=30
Feb 14 04:35:26 crc kubenswrapper[4867]: E0214 04:35:26.764161 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e2abd9c_e70a_4c49_99e2_d8f2606d3916.slice/crio-conmon-1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.027413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" path="/var/lib/kubelet/pods/4e89a71e-e837-4d98-a707-27908a8342bc/volumes"
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.029019 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78fec22-f395-42fc-a228-8d896580bc95" path="/var/lib/kubelet/pods/a78fec22-f395-42fc-a228-8d896580bc95/volumes"
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.130210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e9139dc7-b868-4f7c-9e7e-10e313ff1e10","Type":"ContainerStarted","Data":"81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec"}
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.131673 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89e70483-d3e8-4758-bb61-ae6147dd4f39","Type":"ContainerStarted","Data":"d1fe91c8c6f53cf2cd3095d370426f8434db2b63771db762197d4b1633174d13"}
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136195 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950" exitCode=0
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136230 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab" exitCode=2
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950"}
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136496 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab"}
Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.144789 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.152826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89e70483-d3e8-4758-bb61-ae6147dd4f39","Type":"ContainerStarted","Data":"297abf93528c6931e93a622e3695fb5f753d0b19a6467b48c678927e93f9e34b"}
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.153574 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.155575 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5" exitCode=0
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.155649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5"}
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.157605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e9139dc7-b868-4f7c-9e7e-10e313ff1e10","Type":"ContainerStarted","Data":"90915f128655d36f5a05cb88e69e47360dadef16c0cfc8bedcf47ea687cdc58b"}
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.198321 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.748621739 podStartE2EDuration="3.198295581s" podCreationTimestamp="2026-02-14 04:35:25 +0000 UTC" firstStartedPulling="2026-02-14 04:35:26.400038275 +0000 UTC m=+1558.480975589" lastFinishedPulling="2026-02-14 04:35:26.849712117 +0000 UTC m=+1558.930649431" observedRunningTime="2026-02-14 04:35:28.17590447 +0000 UTC m=+1560.256841784" watchObservedRunningTime="2026-02-14 04:35:28.198295581 +0000 UTC m=+1560.279232905"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.207370 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.716197418 podStartE2EDuration="3.207352365s" podCreationTimestamp="2026-02-14 04:35:25 +0000 UTC" firstStartedPulling="2026-02-14 04:35:26.51561443 +0000 UTC m=+1558.596551744" lastFinishedPulling="2026-02-14 04:35:27.006769377 +0000 UTC m=+1559.087706691" observedRunningTime="2026-02-14 04:35:28.193616376 +0000 UTC m=+1560.274553690" watchObservedRunningTime="2026-02-14 04:35:28.207352365 +0000 UTC m=+1560.288289689"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.215997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.230906 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.239104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.263751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 14 04:35:29 crc kubenswrapper[4867]: I0214 04:35:29.169874 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 14 04:35:29 crc kubenswrapper[4867]: I0214 04:35:29.180937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.212209 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6" exitCode=0
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.214415 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6"}
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.251746 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.251817 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.416891 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.574249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.575882 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.577394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") "
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.577849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578223 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578427 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578535 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.582428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc" (OuterVolumeSpecName: "kube-api-access-td9tc") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "kube-api-access-td9tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.583349 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts" (OuterVolumeSpecName: "scripts") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.622494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681366 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681639 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681711 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.686639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.710563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data" (OuterVolumeSpecName: "config-data") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.784862 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.785288 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.225949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"36752e6e5f2c31ee736f7a9a28d860706f6c2685f55f602f485609bff4a72cd3"}
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.227248 4867 scope.go:117] "RemoveContainer" containerID="d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.227198 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.279422 4867 scope.go:117] "RemoveContainer" containerID="1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.285674 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.312259 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.330686 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331344 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331366 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331391 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331398 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core"
Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331416 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331422 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331438 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331444 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331713 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331727 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331745 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331760 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.333991 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337146 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337418 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337723 4867 scope.go:117] "RemoveContainer" containerID="a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.343712 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.381433 4867 scope.go:117] "RemoveContainer" containerID="cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516031 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516255 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.621550 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.622076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.627891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0"
Feb 14 04:35:32 crc kubenswrapper[4867]: I0214
04:35:32.628220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.628407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.630098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.630775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.640469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.682309 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.015777 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" path="/var/lib/kubelet/pods/1e2abd9c-e70a-4c49-99e2-d8f2606d3916/volumes" Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.154434 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.242197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"73ec0567b19c96951a830a41b4544085988752f18988cc5174bd34b76d04f7d9"} Feb 14 04:35:34 crc kubenswrapper[4867]: I0214 04:35:34.266736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899"} Feb 14 04:35:35 crc kubenswrapper[4867]: I0214 04:35:35.288797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88"} Feb 14 04:35:35 crc kubenswrapper[4867]: I0214 04:35:35.829630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 04:35:36 crc kubenswrapper[4867]: I0214 04:35:36.306500 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d"} Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.329070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0"} Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.329836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.361387 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.745870458 podStartE2EDuration="5.361362372s" podCreationTimestamp="2026-02-14 04:35:32 +0000 UTC" firstStartedPulling="2026-02-14 04:35:33.152919926 +0000 UTC m=+1565.233857240" lastFinishedPulling="2026-02-14 04:35:36.76841184 +0000 UTC m=+1568.849349154" observedRunningTime="2026-02-14 04:35:37.359018609 +0000 UTC m=+1569.439955943" watchObservedRunningTime="2026-02-14 04:35:37.361362372 +0000 UTC m=+1569.442299706" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.250644 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.251348 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.251396 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.252841 4867 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.252906 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" gracePeriod=600 Feb 14 04:36:01 crc kubenswrapper[4867]: E0214 04:36:01.374331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631656 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" exitCode=0 Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631736 4867 scope.go:117] "RemoveContainer" containerID="9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" Feb 14 04:36:01 crc 
kubenswrapper[4867]: I0214 04:36:01.632795 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:01 crc kubenswrapper[4867]: E0214 04:36:01.633331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:02 crc kubenswrapper[4867]: I0214 04:36:02.692364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.009602 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.014688 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.022056 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282373 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.283042 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.283576 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.303401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.357445 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.862241 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.964334 4867 scope.go:117] "RemoveContainer" containerID="026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.005224 4867 scope.go:117] "RemoveContainer" containerID="f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.039650 4867 scope.go:117] "RemoveContainer" containerID="7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751413 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" exitCode=0 Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751638 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781"} Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"f81d9f3f5e58496407123bbe89b13b2f4384e5424f5ed4516e82d1a0c14bf576"} Feb 14 04:36:12 crc kubenswrapper[4867]: I0214 04:36:12.768558 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" 
event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"} Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.780267 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.795466 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.839877 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.842176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.855522 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992046 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.996972 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:13 crc kubenswrapper[4867]: E0214 04:36:13.997431 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.095532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.095876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.096946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc 
kubenswrapper[4867]: I0214 04:36:14.104162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.102928 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.115469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.182843 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.714437 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:36:14 crc kubenswrapper[4867]: W0214 04:36:14.714924 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632c48c8_f0d5_4dc9_823e_fa96b9265e97.slice/crio-f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab WatchSource:0}: Error finding container f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab: Status 404 returned error can't find the container with id f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.797661 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerStarted","Data":"f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab"} Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.014462 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" path="/var/lib/kubelet/pods/18fb2b12-f922-4976-8e05-6e78a8751456/volumes" Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.809785 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.821680 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" exitCode=0 Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.821725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" 
event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.093623 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.093931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" containerID="cri-o://da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094074 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" containerID="cri-o://dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094160 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" containerID="cri-o://925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094242 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" containerID="cri-o://0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.845858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" 
event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850708 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" exitCode=0 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850763 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" exitCode=2 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850774 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" exitCode=0 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850798 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850830 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.877836 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwldn" 
podStartSLOduration=3.326191045 podStartE2EDuration="7.877814892s" podCreationTimestamp="2026-02-14 04:36:09 +0000 UTC" firstStartedPulling="2026-02-14 04:36:11.755282067 +0000 UTC m=+1603.836219391" lastFinishedPulling="2026-02-14 04:36:16.306905924 +0000 UTC m=+1608.387843238" observedRunningTime="2026-02-14 04:36:16.872043111 +0000 UTC m=+1608.952980425" watchObservedRunningTime="2026-02-14 04:36:16.877814892 +0000 UTC m=+1608.958752206" Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.098849 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.869446 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" exitCode=0 Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.869816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88"} Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.490122 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643699 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643999 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq4lx\" 
(UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644058 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644164 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644307 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644763 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.650139 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts" (OuterVolumeSpecName: "scripts") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.650797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx" (OuterVolumeSpecName: "kube-api-access-xq4lx") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "kube-api-access-xq4lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.687958 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.745532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747146 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747180 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747189 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747199 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747208 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.784648 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.849602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data" (OuterVolumeSpecName: "config-data") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.850067 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.850104 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"73ec0567b19c96951a830a41b4544085988752f18988cc5174bd34b76d04f7d9"} Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913756 4867 scope.go:117] "RemoveContainer" containerID="0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913940 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.990453 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.007413 4867 scope.go:117] "RemoveContainer" containerID="dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.042755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.054572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055322 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055347 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055367 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055378 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055409 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055417 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055451 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055778 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055816 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055832 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055852 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.058880 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.065552 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.065607 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.066048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.070221 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.079375 4867 scope.go:117] "RemoveContainer" containerID="925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.129681 4867 scope.go:117] "RemoveContainer" containerID="da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.164733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.164948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165226 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7qf\" (UniqueName: \"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.166013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268784 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268849 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7qf\" (UniqueName: \"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269182 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.270983 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.271069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.275025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc 
kubenswrapper[4867]: I0214 04:36:19.275056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.278456 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.287395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl7qf\" (UniqueName: \"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.290758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.305306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.378637 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.214035 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.357955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.359564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.988114 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"d4f0bdf0dbd1d228ba52e053dd1cc643ebc3046d0b265d62590eb29358a8f187"} Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.027616 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" path="/var/lib/kubelet/pods/755b32e7-a73b-4823-a57a-9ff2346f37ba/volumes" Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.418099 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:36:21 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:36:21 crc kubenswrapper[4867]: > Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.573248 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" containerID="cri-o://3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" gracePeriod=604795 Feb 14 04:36:22 crc kubenswrapper[4867]: I0214 04:36:22.010539 4867 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" containerID="cri-o://1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7" gracePeriod=604796 Feb 14 04:36:27 crc kubenswrapper[4867]: I0214 04:36:27.210325 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.056391 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.111085 4867 generic.go:334] "Generic (PLEG): container finished" podID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerID="3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" exitCode=0 Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.111583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989"} Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.017241 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:29 crc kubenswrapper[4867]: E0214 04:36:29.017961 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.128022 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerID="1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7" exitCode=0 Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.128115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7"} Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.039605 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.042844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.059029 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.085613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: 
\"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102823 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.205868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: 
\"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207279 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207608 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.208394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.208890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.209286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.209864 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.232586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.387641 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.425098 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:36:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:36:31 crc kubenswrapper[4867]: > Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.678498 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.698439 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794495 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794590 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.795281 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796826 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 
crc kubenswrapper[4867]: I0214 04:36:33.796854 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796913 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797080 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797120 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797225 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc 
kubenswrapper[4867]: I0214 04:36:33.806153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.808422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.819415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.829386 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.840117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.845096 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.852261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j" (OuterVolumeSpecName: "kube-api-access-wrf6j") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "kube-api-access-wrf6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.857179 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.857235 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.875833 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.885034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.887872 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.888051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.888666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info" (OuterVolumeSpecName: "pod-info") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.867923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p" (OuterVolumeSpecName: "kube-api-access-q676p") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "kube-api-access-q676p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.912622 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info" (OuterVolumeSpecName: "pod-info") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.939874 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f" (OuterVolumeSpecName: "persistence") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "pvc-d997565a-60ec-4873-b7c9-bde8044c981f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.953605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce" (OuterVolumeSpecName: "persistence") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972281 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972742 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972756 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972766 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972803 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") on node \"crc\" " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972823 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") on node \"crc\" " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972838 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972852 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972865 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972878 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972889 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q676p\" (UniqueName: 
\"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972901 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972912 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972922 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.973916 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf" (OuterVolumeSpecName: "server-conf") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.983849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data" (OuterVolumeSpecName: "config-data") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.987358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data" (OuterVolumeSpecName: "config-data") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.004573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf" (OuterVolumeSpecName: "server-conf") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.043888 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.044313 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d997565a-60ec-4873-b7c9-bde8044c981f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f") on node "crc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.047483 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.048436 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce") on node "crc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.076487 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077270 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077357 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077433 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077498 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077610 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc 
kubenswrapper[4867]: I0214 04:36:34.090705 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.095760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.181975 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.182012 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"1a22c1b816602c7a9c207095a5f963d6cce2df715e59142c62ec1b7539b424fc"} Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194332 4867 scope.go:117] "RemoveContainer" containerID="3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194745 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.196883 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e"} Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.198000 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.260087 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.277298 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.298706 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.317157 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.335566 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336339 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336363 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336398 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336405 
4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336424 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336451 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336724 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336746 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.338541 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.341016 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.341375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.347687 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.348308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.351151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.351602 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.352126 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7gx8s" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.365136 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.368035 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.402148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.415832 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489410 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: 
I0214 04:36:34.489524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489567 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489590 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489622 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489697 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489779 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489902 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592524 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593235 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: 
\"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594460 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596373 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.598424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.601263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.601363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.602155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.602829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.603086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.606362 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.610786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.611747 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.612072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613548 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613586 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d7bcff7c5d0322515cfcd29e48bfb1d0d6f9021316ba38c2028cf5ce82afee/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613615 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613646 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c81ba883a06ca9e019b2d7c726ddbfb519b81827f5cfcee1e25c00752814b8f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.617418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.617898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.631133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.698314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.701280 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.707265 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.975653 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:35 crc kubenswrapper[4867]: I0214 04:36:35.013300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" path="/var/lib/kubelet/pods/9bba5174-edd6-4e59-8b84-6c50439be88e/volumes" Feb 14 04:36:35 crc kubenswrapper[4867]: I0214 04:36:35.015139 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" path="/var/lib/kubelet/pods/e1e022d9-e2db-41eb-bbc8-36a85211a141/volumes" Feb 14 04:36:40 crc kubenswrapper[4867]: I0214 04:36:40.409090 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:40 crc kubenswrapper[4867]: I0214 04:36:40.480125 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759714 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759782 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759920 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644h55dh68fh5b9h59ch5ch676h577h677hc6h557h5cdhdh54dh5b7h5f8h59h549hc8h584h5cchf6hb8h66ch95h6dh544h594h54h58ch8dh648q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(27437fd9-2bc5-48ac-9e34-e733da15dd2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.102002 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.102928 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.103106 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j82w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-l8hr2_openstack(632c48c8-f0d5-4dc9-823e-fa96b9265e97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.104636 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-l8hr2" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.129367 4867 scope.go:117] "RemoveContainer" containerID="cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08" Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.220248 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.290030 4867 scope.go:117] "RemoveContainer" containerID="1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7" Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.304720 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-l8hr2" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.350754 4867 scope.go:117] "RemoveContainer" containerID="262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230" Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.699175 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.709978 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa7ab_eaaa_4558_99d5_c655cf271f62.slice/crio-6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e WatchSource:0}: Error finding container 
6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e: Status 404 returned error can't find the container with id 6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.714655 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0901cb1a_f3c5_4eff_843b_cdb5c5c7a78c.slice/crio-70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995 WatchSource:0}: Error finding container 70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995: Status 404 returned error can't find the container with id 70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995 Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.717005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.720790 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323a0af9_9e80_476b_8315_e20a6dd41293.slice/crio-09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a WatchSource:0}: Error finding container 09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a: Status 404 returned error can't find the container with id 09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.729933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.321316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"1557719afe78de0e4e29cad64cf88aca042467a40974993c334f00a52cde8934"} Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.322913 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e"} Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.323931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995"} Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.325327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22"} Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.325354 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a"} Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.328931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" containerID="cri-o://2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" gracePeriod=2 Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.902971 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.038383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.038741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.039081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.040184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities" (OuterVolumeSpecName: "utilities") pod "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" (UID: "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.040340 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.096098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" (UID: "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.142407 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.159136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h" (OuterVolumeSpecName: "kube-api-access-snc9h") pod "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" (UID: "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0"). InnerVolumeSpecName "kube-api-access-snc9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.245791 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.344969 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" exitCode=0 Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345122 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345152 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345229 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"f81d9f3f5e58496407123bbe89b13b2f4384e5424f5ed4516e82d1a0c14bf576"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345274 4867 scope.go:117] "RemoveContainer" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.347604 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"793cecbd56bcbcfca8f8a59fa74a8549ee89fe0d7b86bb7ed8129fcfe01fcb5d"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.352028 4867 generic.go:334] "Generic (PLEG): container finished" podID="323a0af9-9e80-476b-8315-e20a6dd41293" containerID="77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22" exitCode=0 Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.352096 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.917426 4867 scope.go:117] "RemoveContainer" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.935659 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.946800 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:43 crc 
kubenswrapper[4867]: I0214 04:36:43.956653 4867 scope.go:117] "RemoveContainer" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.998392 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:43 crc kubenswrapper[4867]: E0214 04:36:43.998957 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038048 4867 scope.go:117] "RemoveContainer" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.038655 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": container with ID starting with 2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5 not found: ID does not exist" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038707 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} err="failed to get container status \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": rpc error: code = NotFound desc = could not find container \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": container with ID starting with 
2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5 not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038741 4867 scope.go:117] "RemoveContainer" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.039668 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": container with ID starting with 4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea not found: ID does not exist" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.039748 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"} err="failed to get container status \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": rpc error: code = NotFound desc = could not find container \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": container with ID starting with 4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.039798 4867 scope.go:117] "RemoveContainer" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.040261 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": container with ID starting with 5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781 not found: ID does not exist" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:44 crc 
kubenswrapper[4867]: I0214 04:36:44.040306 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781"} err="failed to get container status \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": rpc error: code = NotFound desc = could not find container \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": container with ID starting with 5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781 not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.340883 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.368861 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.371313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.371449 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.375652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"07c1546e9d32c390db109f6ed008be97ed287780d0e353ea325161c7f8bf4380"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.375806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.377229 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.377718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.454599 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" podStartSLOduration=14.454570282 podStartE2EDuration="14.454570282s" podCreationTimestamp="2026-02-14 04:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:36:44.449845938 +0000 UTC m=+1636.530783252" watchObservedRunningTime="2026-02-14 04:36:44.454570282 +0000 UTC m=+1636.535507596" Feb 14 04:36:45 crc kubenswrapper[4867]: I0214 04:36:45.013791 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" path="/var/lib/kubelet/pods/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0/volumes" Feb 14 04:36:45 crc kubenswrapper[4867]: E0214 04:36:45.394682 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.389871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.499289 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.499962 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" containerID="cri-o://9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" gracePeriod=10 Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.623817 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 04:36:51.627400 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-utilities" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627441 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-utilities" Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 04:36:51.627460 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627468 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 
04:36:51.627493 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-content" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627502 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-content" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627831 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.629826 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.655473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwtp\" (UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.809081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.910889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwtp\" 
(UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911349 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.912948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.939179 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwtp\" (UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.008798 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.172114 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.346942 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347197 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347267 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.354633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt" (OuterVolumeSpecName: "kube-api-access-wnqpt") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "kube-api-access-wnqpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.431851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config" (OuterVolumeSpecName: "config") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.460556 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.462113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.464632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: W0214 04:36:52.464750 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5971b677-9b43-4667-b205-3926975d03d8/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.464768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465732 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465752 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465763 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465771 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.469769 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488150 4867 generic.go:334] "Generic (PLEG): container finished" podID="5971b677-9b43-4667-b205-3926975d03d8" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" exitCode=0 Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488224 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"3c342daaec09db1c73482280fce80173920eec884b7d07687fab104355216038"} Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488244 4867 scope.go:117] "RemoveContainer" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488409 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.493109 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.528456 4867 scope.go:117] "RemoveContainer" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.564561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.565450 4867 scope.go:117] "RemoveContainer" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: E0214 04:36:52.566559 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": container with ID starting with 9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22 not found: ID does not exist" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.566609 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} err="failed to get container status \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": rpc error: code = NotFound desc = could not find container \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": container with ID starting with 9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22 not found: ID does not exist" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.566656 4867 scope.go:117] "RemoveContainer" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: E0214 04:36:52.567017 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": container with ID starting with 6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f not found: ID does not exist" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567053 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f"} err="failed to get container status \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": rpc error: code = NotFound desc = could not find container \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": container with ID starting with 6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f not found: ID does not exist" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567719 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567757 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: W0214 04:36:52.572718 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff227b0_1fbd_4d96_9201_8ef0fb5a68a6.slice/crio-67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368 WatchSource:0}: Error finding container 67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368: Status 404 returned error can't find the container with id 67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368 Feb 14 04:36:52 crc 
kubenswrapper[4867]: I0214 04:36:52.862588 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.882631 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.018172 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5971b677-9b43-4667-b205-3926975d03d8" path="/var/lib/kubelet/pods/5971b677-9b43-4667-b205-3926975d03d8/volumes" Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513909 4867 generic.go:334] "Generic (PLEG): container finished" podID="2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6" containerID="91bff455b8941f7c61a76deb1385e22d412455f0a3376814dc857712292e4023" exitCode=0 Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerDied","Data":"91bff455b8941f7c61a76deb1385e22d412455f0a3376814dc857712292e4023"} Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerStarted","Data":"67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368"} Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.528831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerStarted","Data":"83842c5661d318a59ff36083bda28f2f64ebeb6e3b1dc9f95877497a7d664886"} Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.529085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.559368 
4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" podStartSLOduration=3.559340609 podStartE2EDuration="3.559340609s" podCreationTimestamp="2026-02-14 04:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:36:54.549225714 +0000 UTC m=+1646.630163068" watchObservedRunningTime="2026-02-14 04:36:54.559340609 +0000 UTC m=+1646.640277923" Feb 14 04:36:55 crc kubenswrapper[4867]: I0214 04:36:55.543783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerStarted","Data":"de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2"} Feb 14 04:36:55 crc kubenswrapper[4867]: I0214 04:36:55.566010 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-l8hr2" podStartSLOduration=2.098260968 podStartE2EDuration="42.565989488s" podCreationTimestamp="2026-02-14 04:36:13 +0000 UTC" firstStartedPulling="2026-02-14 04:36:14.718802448 +0000 UTC m=+1606.799739762" lastFinishedPulling="2026-02-14 04:36:55.186530968 +0000 UTC m=+1647.267468282" observedRunningTime="2026-02-14 04:36:55.562307891 +0000 UTC m=+1647.643245205" watchObservedRunningTime="2026-02-14 04:36:55.565989488 +0000 UTC m=+1647.646926822" Feb 14 04:36:58 crc kubenswrapper[4867]: I0214 04:36:58.580267 4867 generic.go:334] "Generic (PLEG): container finished" podID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerID="de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2" exitCode=0 Feb 14 04:36:58 crc kubenswrapper[4867]: I0214 04:36:58.580382 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerDied","Data":"de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2"} Feb 14 
04:36:59 crc kubenswrapper[4867]: I0214 04:36:59.014656 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:59 crc kubenswrapper[4867]: E0214 04:36:59.014910 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.023516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.134898 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186293 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186527 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod 
\"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.192292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7" (OuterVolumeSpecName: "kube-api-access-j82w7") pod "632c48c8-f0d5-4dc9-823e-fa96b9265e97" (UID: "632c48c8-f0d5-4dc9-823e-fa96b9265e97"). InnerVolumeSpecName "kube-api-access-j82w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.223136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "632c48c8-f0d5-4dc9-823e-fa96b9265e97" (UID: "632c48c8-f0d5-4dc9-823e-fa96b9265e97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.291914 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.291988 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.297877 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data" (OuterVolumeSpecName: "config-data") pod "632c48c8-f0d5-4dc9-823e-fa96b9265e97" (UID: "632c48c8-f0d5-4dc9-823e-fa96b9265e97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.394259 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.612964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerDied","Data":"f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab"} Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614879 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614909 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.642293 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.523163866 podStartE2EDuration="42.642270758s" podCreationTimestamp="2026-02-14 04:36:18 +0000 UTC" firstStartedPulling="2026-02-14 04:36:20.244783202 +0000 UTC m=+1612.325720516" lastFinishedPulling="2026-02-14 04:37:00.363890094 +0000 UTC m=+1652.444827408" observedRunningTime="2026-02-14 04:37:00.639596458 +0000 UTC m=+1652.720533772" watchObservedRunningTime="2026-02-14 04:37:00.642270758 +0000 UTC m=+1652.723208072" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.636959 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637813 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="init" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637828 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="init" Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637857 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637880 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 
04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.638121 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.638156 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.639281 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.660112 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.706776 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.709052 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.723669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.734756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.734805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.735141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.735221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.741667 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.743558 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.767722 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.837904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838208 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " 
pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838382 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") 
" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838696 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: 
\"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.839104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.839137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.844777 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.846418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.852585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: 
\"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.855903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.940971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " 
pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941950 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " 
pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.959007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.964688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 
04:37:01.964761 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965319 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.966888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.967210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.967580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.968238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.969448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.973898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.975109 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.010677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.042260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.088473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.095388 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.095660 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" containerID="cri-o://807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" gracePeriod=10 Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.665108 4867 generic.go:334] "Generic (PLEG): container finished" podID="323a0af9-9e80-476b-8315-e20a6dd41293" containerID="807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" exitCode=0 Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.665805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38"} Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.690635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.731390 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.873864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876337 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876451 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876639 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.893829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9" (OuterVolumeSpecName: "kube-api-access-lgct9") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "kube-api-access-lgct9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.982171 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.063387 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.088410 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.122692 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config" (OuterVolumeSpecName: "config") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.124617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.126985 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.133958 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.145778 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.205989 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206025 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206083 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206107 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206117 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.214533 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.214592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.710646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57b4cc7645-246cl" event={"ID":"24d4f5bc-b41b-4f17-977e-d36995a99521","Type":"ContainerStarted","Data":"3bb339c7fb5a9f6190d834f9e570b3b868e24aa60f5c4bad1436e0ef3d6f9efc"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.712278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64c645895b-sclxg" event={"ID":"7996e855-fbe0-4324-a337-8841df83e714","Type":"ContainerStarted","Data":"3982672d28d8f4fc4e1d43912a585753f73acb6860abc2d71be47146ae7fd801"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.716843 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.717029 4867 scope.go:117] "RemoveContainer" containerID="807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.717308 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.724327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b479dbc77-k8ts7" event={"ID":"fcce6a26-826f-4268-9007-2e3c4411450f","Type":"ContainerStarted","Data":"28c1c18be8bc5af10b632138413323e95c7af925ff2cd4c90fd2806527ff9500"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.724373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b479dbc77-k8ts7" event={"ID":"fcce6a26-826f-4268-9007-2e3c4411450f","Type":"ContainerStarted","Data":"1f8d5d90d5f40764a034778f809df10a77a1f772ff13cee8c1701ef09bdbcdea"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.725875 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.741421 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b479dbc77-k8ts7" podStartSLOduration=2.741405224 podStartE2EDuration="2.741405224s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:03.739283428 +0000 UTC m=+1655.820220742" watchObservedRunningTime="2026-02-14 04:37:03.741405224 +0000 UTC m=+1655.822342538" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.760874 4867 scope.go:117] "RemoveContainer" containerID="77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.801846 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.812831 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:05 crc kubenswrapper[4867]: 
I0214 04:37:05.017309 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" path="/var/lib/kubelet/pods/323a0af9-9e80-476b-8315-e20a6dd41293/volumes" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.758461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57b4cc7645-246cl" event={"ID":"24d4f5bc-b41b-4f17-977e-d36995a99521","Type":"ContainerStarted","Data":"a34564f9258d21118b5a9fc47dbf271e87bce6c7fcb1542194d809e1a09780fc"} Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.758830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.764207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64c645895b-sclxg" event={"ID":"7996e855-fbe0-4324-a337-8841df83e714","Type":"ContainerStarted","Data":"7496fb13527df2a9a4ad391ac85f871999c3edeb59b8d116df71d91e7094c773"} Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.764256 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.806275 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57b4cc7645-246cl" podStartSLOduration=2.890746538 podStartE2EDuration="4.806241124s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="2026-02-14 04:37:03.078746854 +0000 UTC m=+1655.159684168" lastFinishedPulling="2026-02-14 04:37:04.99424144 +0000 UTC m=+1657.075178754" observedRunningTime="2026-02-14 04:37:05.790822329 +0000 UTC m=+1657.871759643" watchObservedRunningTime="2026-02-14 04:37:05.806241124 +0000 UTC m=+1657.887178428" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.817646 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64c645895b-sclxg" 
podStartSLOduration=2.870956568 podStartE2EDuration="4.817620623s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="2026-02-14 04:37:03.064098729 +0000 UTC m=+1655.145036043" lastFinishedPulling="2026-02-14 04:37:05.010762784 +0000 UTC m=+1657.091700098" observedRunningTime="2026-02-14 04:37:05.813908166 +0000 UTC m=+1657.894845480" watchObservedRunningTime="2026-02-14 04:37:05.817620623 +0000 UTC m=+1657.898557937" Feb 14 04:37:10 crc kubenswrapper[4867]: I0214 04:37:10.998050 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:11 crc kubenswrapper[4867]: E0214 04:37:10.999326 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.266016 4867 scope.go:117] "RemoveContainer" containerID="6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.330870 4867 scope.go:117] "RemoveContainer" containerID="69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.376139 4867 scope.go:117] "RemoveContainer" containerID="15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.406032 4867 scope.go:117] "RemoveContainer" containerID="f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.462468 4867 scope.go:117] "RemoveContainer" 
containerID="509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.538479 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.547782 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.652745 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.652986 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f55d59bf5-wfw72" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" containerID="cri-o://c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" gracePeriod=60 Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.681789 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.682019 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" containerID="cri-o://11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" gracePeriod=60 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.931755 4867 generic.go:334] "Generic (PLEG): container finished" podID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerID="11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.932300 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" 
event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerDied","Data":"11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7"} Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.937924 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8afa7ab-eaaa-4558-99d5-c655cf271f62" containerID="ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.937988 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerDied","Data":"ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c"} Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.941531 4867 generic.go:334] "Generic (PLEG): container finished" podID="0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c" containerID="7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.941612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerDied","Data":"7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.420332 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466550 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466574 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.483793 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.486777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg" (OuterVolumeSpecName: "kube-api-access-m47wg") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "kube-api-access-m47wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.568647 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.568683 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.583808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.610650 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data" (OuterVolumeSpecName: "config-data") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.626708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679912 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679947 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679958 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.709802 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.762975 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796594 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796851 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796900 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.797305 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.806246 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.807491 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p" (OuterVolumeSpecName: "kube-api-access-vf86p") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "kube-api-access-vf86p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.870703 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901153 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901477 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901490 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.919603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data" (OuterVolumeSpecName: "config-data") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.922601 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.929840 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.956907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerDied","Data":"830da82a952f0eb79b72815166e8401af585ff9b46564a5260025bbc1ac28ad6"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.956966 4867 scope.go:117] "RemoveContainer" containerID="11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.957121 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.962737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"512f2e34264f13e3be24daf121ee312c117a304ab4942efe8423bc47687661a4"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.963495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.978878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"f9320adf76aa40017005d159c959a51602c8dbc3b8224d7010e1c233cc75e4a0"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.979940 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981312 4867 generic.go:334] "Generic (PLEG): container finished" podID="fe0cc502-2f6a-41d9-8761-da930802201e" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" exitCode=0 Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerDied","Data":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerDied","Data":"049d086e76bc10d2a5f14c7d8a9fe02a2d5fd8eadb747b6e9d8413f65e7ceb0e"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981389 4867 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.010963 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.011000 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.011013 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.021423 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.021403859 podStartE2EDuration="44.021403859s" podCreationTimestamp="2026-02-14 04:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:18.016699475 +0000 UTC m=+1670.097636789" watchObservedRunningTime="2026-02-14 04:37:18.021403859 +0000 UTC m=+1670.102341173" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.064270 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.064252105 podStartE2EDuration="44.064252105s" podCreationTimestamp="2026-02-14 04:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:18.053858822 +0000 UTC m=+1670.134796136" watchObservedRunningTime="2026-02-14 04:37:18.064252105 +0000 UTC 
m=+1670.145189419" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.149306 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.166589 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.179779 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.196807 4867 scope.go:117] "RemoveContainer" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.203198 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.255326 4867 scope.go:117] "RemoveContainer" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: E0214 04:37:18.258087 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": container with ID starting with c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503 not found: ID does not exist" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.258141 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"} err="failed to get container status \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": rpc error: code = NotFound desc = could not find container \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": container with ID starting with 
c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503 not found: ID does not exist" Feb 14 04:37:19 crc kubenswrapper[4867]: I0214 04:37:19.126128 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" path="/var/lib/kubelet/pods/16f76a07-1b4d-4057-84c6-0cae915e01f7/volumes" Feb 14 04:37:19 crc kubenswrapper[4867]: I0214 04:37:19.127578 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" path="/var/lib/kubelet/pods/fe0cc502-2f6a-41d9-8761-da930802201e/volumes" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532030 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532586 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532601 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532622 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532628 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532639 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532646 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 
04:37:21.532674 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="init" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532680 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="init" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532899 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532910 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532927 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.533763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.536409 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537888 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537994 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.588880 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.602757 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.602943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 
04:37:21.602969 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.603039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.704919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.705450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.705484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.706002 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.739176 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.739185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.745260 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.749682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.874084 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.036128 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.167261 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.167614 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7797898b6d-54xz8" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" containerID="cri-o://f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" gracePeriod=60 Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.997833 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:22 crc kubenswrapper[4867]: E0214 04:37:22.998549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:23 crc kubenswrapper[4867]: W0214 04:37:23.040946 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51f6e45c_a545_4b49_b6f8_a3048619f24d.slice/crio-1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24 WatchSource:0}: Error finding container 1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24: Status 404 returned error can't find the container with id 1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24 Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.061143 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.949013 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.966729 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.041355 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.042997 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.046134 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.060285 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.068427 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerStarted","Data":"1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24"} Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167533 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167657 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: 
\"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277308 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.292807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.385839 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.898071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:25 crc kubenswrapper[4867]: I0214 04:37:25.025854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" path="/var/lib/kubelet/pods/df373c99-9a99-4793-90ef-3ad7887e5e3e/volumes" Feb 14 04:37:25 crc kubenswrapper[4867]: I0214 04:37:25.086735 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerStarted","Data":"c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b"} Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.984569 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.986686 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.988952 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.989025 
4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7797898b6d-54xz8" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.356546 4867 generic.go:334] "Generic (PLEG): container finished" podID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" exitCode=0 Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.356765 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerDied","Data":"f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198"} Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.998609 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:33 crc kubenswrapper[4867]: E0214 04:37:33.999328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.712323 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.784010 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.979785 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.920201 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.920834 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 14 04:37:38 crc kubenswrapper[4867]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 14 04:37:38 crc kubenswrapper[4867]: - hosts: all Feb 14 04:37:38 crc kubenswrapper[4867]: strategy: linear Feb 14 04:37:38 crc kubenswrapper[4867]: tasks: Feb 14 04:37:38 crc kubenswrapper[4867]: - name: Enable podified-repos Feb 14 04:37:38 crc kubenswrapper[4867]: become: true Feb 14 04:37:38 crc kubenswrapper[4867]: ansible.builtin.shell: | Feb 14 04:37:38 crc kubenswrapper[4867]: set -euxo pipefail Feb 14 04:37:38 crc kubenswrapper[4867]: pushd /var/tmp Feb 14 04:37:38 crc kubenswrapper[4867]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 14 04:37:38 crc kubenswrapper[4867]: pushd repo-setup-main Feb 14 04:37:38 crc kubenswrapper[4867]: python3 -m venv ./venv Feb 14 04:37:38 crc kubenswrapper[4867]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 14 04:37:38 crc kubenswrapper[4867]: ./venv/bin/repo-setup current-podified -b antelope Feb 14 04:37:38 crc kubenswrapper[4867]: popd Feb 14 04:37:38 crc kubenswrapper[4867]: rm -rf repo-setup-main Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc 
kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 14 04:37:38 crc kubenswrapper[4867]: edpm_override_hosts: openstack-edpm-ipam Feb 14 04:37:38 crc kubenswrapper[4867]: edpm_service_type: repo-setup Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtch4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:
[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf_openstack(51f6e45c-a545-4b49-b6f8-a3048619f24d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 14 04:37:38 crc kubenswrapper[4867]: > logger="UnhandledError" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.922017 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.033853 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:37:39 crc kubenswrapper[4867]: E0214 04:37:39.453187 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.570191 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.688725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.688849 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.689037 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.689136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.699986 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.701929 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2" (OuterVolumeSpecName: "kube-api-access-rpdz2") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "kube-api-access-rpdz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.735425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.758836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data" (OuterVolumeSpecName: "config-data") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.791940 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.791987 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.792002 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.792012 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.908527 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" containerID="cri-o://88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3" gracePeriod=604795 Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.462651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerStarted","Data":"fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a"} Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" 
event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerDied","Data":"dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43"} Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465159 4867 scope.go:117] "RemoveContainer" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465193 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.508089 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.533339 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:41 crc kubenswrapper[4867]: I0214 04:37:41.033092 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" path="/var/lib/kubelet/pods/7535f37c-f2f6-4e75-bfa2-48211fe86ef6/volumes" Feb 14 04:37:41 crc kubenswrapper[4867]: I0214 04:37:41.505630 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-vgdj4" podStartSLOduration=3.390382844 podStartE2EDuration="17.505611729s" podCreationTimestamp="2026-02-14 04:37:24 +0000 UTC" firstStartedPulling="2026-02-14 04:37:24.909723679 +0000 UTC m=+1676.990660993" lastFinishedPulling="2026-02-14 04:37:39.024952564 +0000 UTC m=+1691.105889878" observedRunningTime="2026-02-14 04:37:41.503989187 +0000 UTC m=+1693.584926501" watchObservedRunningTime="2026-02-14 04:37:41.505611729 +0000 UTC m=+1693.586549043" Feb 14 04:37:44 crc kubenswrapper[4867]: I0214 04:37:44.519293 4867 generic.go:334] "Generic (PLEG): container finished" podID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerID="fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a" exitCode=0 Feb 14 04:37:44 crc 
kubenswrapper[4867]: I0214 04:37:44.519404 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerDied","Data":"fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a"} Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.074154 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166236 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166467 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.175027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts" (OuterVolumeSpecName: "scripts") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.176334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22" (OuterVolumeSpecName: "kube-api-access-4zc22") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "kube-api-access-4zc22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.206050 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data" (OuterVolumeSpecName: "config-data") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.211616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269077 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269471 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269486 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269498 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.551368 4867 generic.go:334] "Generic (PLEG): container finished" podID="6bc83863-74f4-4509-969c-0f3305a542a8" containerID="88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3" exitCode=0 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.551479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3"} Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.560075 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerDied","Data":"c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b"} Feb 14 04:37:46 crc 
kubenswrapper[4867]: I0214 04:37:46.560117 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.560182 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.701094 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783610 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783699 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 
04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784478 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784645 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784695 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") "
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") "
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.787252 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.787904 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.790843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.791844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk" (OuterVolumeSpecName: "kube-api-access-294tk") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "kube-api-access-294tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.802173 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info" (OuterVolumeSpecName: "pod-info") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.803600 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.804152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.836051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data" (OuterVolumeSpecName: "config-data") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.873718 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f" (OuterVolumeSpecName: "persistence") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "pvc-64ab6375-8d81-46bd-80ba-b738c813923f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.880054 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf" (OuterVolumeSpecName: "server-conf") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889238 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") on node \"crc\" "
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889268 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889280 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889289 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889297 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889306 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889315 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889327 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889336 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889344 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.954357 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.954670 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-64ab6375-8d81-46bd-80ba-b738c813923f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f") on node "crc"
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.963188 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966030 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" containerID="cri-o://4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" gracePeriod=30
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966704 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" containerID="cri-o://27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" gracePeriod=30
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966771 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" containerID="cri-o://57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" gracePeriod=30
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966814 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" containerID="cri-o://a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" gracePeriod=30
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.993540 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.998968 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:37:46 crc kubenswrapper[4867]: E0214 04:37:46.999225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.043757 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.095725 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573178 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" exitCode=0
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573477 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" exitCode=0
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be"}
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45"}
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"6d2235a75be13119e9c9aa74a5f3a2e2f13d32b41febb3b537fd57f955f1f8bc"}
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575821 4867 scope.go:117] "RemoveContainer" containerID="88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575977 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.612131 4867 scope.go:117] "RemoveContainer" containerID="da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.616552 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.634544 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.656768 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657445 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine"
Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657463 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657469 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq"
Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657484 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657491 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync"
Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657526 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="setup-container"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657532 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="setup-container"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657786 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657820 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657844 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.659493 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.683449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709402 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710093 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812751 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812983 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813804 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.815490 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.815551 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55ff7cc17667ae9e120da2b34de2e1baed28e5c0bfceac7c1699349f36759e58/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.821006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823353 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.841281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.891678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.978350 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 04:37:48 crc kubenswrapper[4867]: I0214 04:37:48.580286 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:37:48 crc kubenswrapper[4867]: W0214 04:37:48.584718 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24 WatchSource:0}: Error finding container 59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24: Status 404 returned error can't find the container with id 59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24
Feb 14 04:37:49 crc kubenswrapper[4867]: I0214 04:37:49.015007 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" path="/var/lib/kubelet/pods/6bc83863-74f4-4509-969c-0f3305a542a8/volumes"
Feb 14 04:37:49 crc kubenswrapper[4867]: I0214 04:37:49.606927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24"}
Feb 14 04:37:50 crc kubenswrapper[4867]: I0214 04:37:50.619352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d"}
Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.653599 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" exitCode=0
Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.655497 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" exitCode=0
Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.658294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95"}
Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.658904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef"}
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.079392 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.153352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.153633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.155201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") "
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.169551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts" (OuterVolumeSpecName: "scripts") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.169682 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq" (OuterVolumeSpecName: "kube-api-access-m47lq") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "kube-api-access-m47lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.277568 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.277943 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.280786 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.292768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.338118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.354592 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data" (OuterVolumeSpecName: "config-data") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382260 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382351 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382370 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382385 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.681200 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.680958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"cc6bfc1f8b14bfadc90bd97fe9104d42e32da1b206a8c9f9b7d46cb64815cc9b"} Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.681297 4867 scope.go:117] "RemoveContainer" containerID="27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.752488 4867 scope.go:117] "RemoveContainer" containerID="57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.765068 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.786715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.815608 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816308 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816329 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816338 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816368 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816376 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816400 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816712 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816742 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816763 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816781 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.819724 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831212 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831598 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831774 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831917 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.832066 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.871021 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.885755 4867 scope.go:117] "RemoveContainer" containerID="a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 
04:37:52.899735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899842 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.964704 4867 scope.go:117] "RemoveContainer" containerID="4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001800 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.019334 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 
04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.031115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.031844 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.034049 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.038141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.041941 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.043235 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58861691-18ee-408e-9b79-b12a411e99d0" path="/var/lib/kubelet/pods/58861691-18ee-408e-9b79-b12a411e99d0/volumes" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.178221 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.694713 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:53 crc kubenswrapper[4867]: W0214 04:37:53.698231 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532a3c72_e995_4be9_a7db_f288b6c1a311.slice/crio-381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42 WatchSource:0}: Error finding container 381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42: Status 404 returned error can't find the container with id 381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42 Feb 14 04:37:54 crc kubenswrapper[4867]: I0214 04:37:54.770240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"39999fbf2ddf3c22f5b9205c3843402abc9bb8243fcc54eedfbd407de609235f"} Feb 14 04:37:54 crc kubenswrapper[4867]: I0214 04:37:54.770764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.462372 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.784606 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerStarted","Data":"4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.790553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"378c59ea6c07febe7b47f99516a097f124d4b45c0df3c5c729a6d53fa1de580b"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.813847 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podStartSLOduration=2.399048747 podStartE2EDuration="34.813817035s" podCreationTimestamp="2026-02-14 04:37:21 +0000 UTC" firstStartedPulling="2026-02-14 04:37:23.044566844 +0000 UTC m=+1675.125504158" lastFinishedPulling="2026-02-14 04:37:55.459335132 +0000 UTC m=+1707.540272446" observedRunningTime="2026-02-14 04:37:55.802038756 +0000 UTC m=+1707.882976070" watchObservedRunningTime="2026-02-14 04:37:55.813817035 +0000 UTC m=+1707.894754359" Feb 14 04:37:57 crc kubenswrapper[4867]: I0214 04:37:57.819058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"d2f3218bdb190f321c2fbe6cd36634897baca28f15e2ed125c73b6fd0acc1b07"} Feb 14 04:37:58 crc kubenswrapper[4867]: I0214 04:37:58.834198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"e270e96ca58e876c7e16b4b03ffe7a632053e7fb18379eb0e83e058f2f0eec47"} Feb 14 04:37:58 crc kubenswrapper[4867]: I0214 04:37:58.880130 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.469795514 podStartE2EDuration="6.880110567s" podCreationTimestamp="2026-02-14 04:37:52 +0000 UTC" firstStartedPulling="2026-02-14 04:37:53.701477417 +0000 UTC m=+1705.782414731" lastFinishedPulling="2026-02-14 04:37:58.11179246 +0000 UTC m=+1710.192729784" observedRunningTime="2026-02-14 04:37:58.869601011 +0000 UTC m=+1710.950538325" watchObservedRunningTime="2026-02-14 04:37:58.880110567 +0000 UTC 
m=+1710.961047881" Feb 14 04:38:01 crc kubenswrapper[4867]: I0214 04:38:01.998352 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:02 crc kubenswrapper[4867]: E0214 04:38:01.999376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:07 crc kubenswrapper[4867]: I0214 04:38:07.964282 4867 generic.go:334] "Generic (PLEG): container finished" podID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerID="4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0" exitCode=0 Feb 14 04:38:07 crc kubenswrapper[4867]: I0214 04:38:07.964369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerDied","Data":"4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0"} Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.592714 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695491 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.709761 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.712480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4" (OuterVolumeSpecName: "kube-api-access-mtch4") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "kube-api-access-mtch4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.738776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.738833 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory" (OuterVolumeSpecName: "inventory") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800149 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800297 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800395 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800487 4867 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerDied","Data":"1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24"} Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990395 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990394 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.093031 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:10 crc kubenswrapper[4867]: E0214 04:38:10.093714 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.093742 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.094103 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.095248 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100017 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100095 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100242 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100333 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.112413 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.210486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.210603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.211484 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.317738 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.325185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.345157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.412410 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: W0214 04:38:10.982470 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c240366_e845_4987_943c_afc965ddc2f4.slice/crio-616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174 WatchSource:0}: Error finding container 616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174: Status 404 returned error can't find the container with id 616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174 Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.989832 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:11 crc kubenswrapper[4867]: I0214 04:38:11.015230 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerStarted","Data":"616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174"} Feb 14 04:38:11 crc kubenswrapper[4867]: I0214 04:38:11.740692 4867 scope.go:117] "RemoveContainer" containerID="60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a" Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.018047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerStarted","Data":"a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12"} Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.042941 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" podStartSLOduration=1.408118221 podStartE2EDuration="2.04292041s" podCreationTimestamp="2026-02-14 04:38:10 +0000 UTC" 
firstStartedPulling="2026-02-14 04:38:10.986470043 +0000 UTC m=+1723.067407357" lastFinishedPulling="2026-02-14 04:38:11.621272232 +0000 UTC m=+1723.702209546" observedRunningTime="2026-02-14 04:38:12.032047974 +0000 UTC m=+1724.112985288" watchObservedRunningTime="2026-02-14 04:38:12.04292041 +0000 UTC m=+1724.123857724" Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.998130 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:12 crc kubenswrapper[4867]: E0214 04:38:12.998729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:15 crc kubenswrapper[4867]: I0214 04:38:15.055394 4867 generic.go:334] "Generic (PLEG): container finished" podID="0c240366-e845-4987-943c-afc965ddc2f4" containerID="a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12" exitCode=0 Feb 14 04:38:15 crc kubenswrapper[4867]: I0214 04:38:15.055499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerDied","Data":"a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12"} Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.814919 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886404 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.897940 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8" (OuterVolumeSpecName: "kube-api-access-xjpp8") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). InnerVolumeSpecName "kube-api-access-xjpp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.932722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory" (OuterVolumeSpecName: "inventory") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.939009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989733 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989767 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989777 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.081303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerDied","Data":"616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174"} Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.081351 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 
04:38:17.081412 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.153969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:17 crc kubenswrapper[4867]: E0214 04:38:17.154479 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.154498 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.154731 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.155494 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157553 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157902 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.180228 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: 
I0214 04:38:17.197436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300647 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.304189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.304408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.318478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.328784 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.478763 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:18 crc kubenswrapper[4867]: I0214 04:38:18.032644 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:18 crc kubenswrapper[4867]: I0214 04:38:18.094846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerStarted","Data":"29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585"} Feb 14 04:38:19 crc kubenswrapper[4867]: I0214 04:38:19.113696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerStarted","Data":"092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266"} Feb 14 04:38:19 crc kubenswrapper[4867]: I0214 04:38:19.128393 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" podStartSLOduration=1.733767801 podStartE2EDuration="2.128373579s" podCreationTimestamp="2026-02-14 04:38:17 +0000 UTC" firstStartedPulling="2026-02-14 04:38:18.037448688 +0000 UTC m=+1730.118386002" 
lastFinishedPulling="2026-02-14 04:38:18.432054476 +0000 UTC m=+1730.512991780" observedRunningTime="2026-02-14 04:38:19.128252526 +0000 UTC m=+1731.209189880" watchObservedRunningTime="2026-02-14 04:38:19.128373579 +0000 UTC m=+1731.209310893" Feb 14 04:38:22 crc kubenswrapper[4867]: E0214 04:38:22.909358 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-conmon-0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:38:23 crc kubenswrapper[4867]: I0214 04:38:23.176834 4867 generic.go:334] "Generic (PLEG): container finished" podID="82f2a63e-b256-4ad7-96ee-1def8a174cfb" containerID="0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d" exitCode=0 Feb 14 04:38:23 crc kubenswrapper[4867]: I0214 04:38:23.176887 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerDied","Data":"0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d"} Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.187728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"af9e2c35de2cc94006f292659a9a95da1307cfc7554fb5036d7df0d867dfc8f3"} Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.188446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.219237 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.219216444 podStartE2EDuration="37.219216444s" podCreationTimestamp="2026-02-14 04:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:38:24.208438391 +0000 UTC m=+1736.289375725" watchObservedRunningTime="2026-02-14 04:38:24.219216444 +0000 UTC m=+1736.300153748" Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.997331 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:24 crc kubenswrapper[4867]: E0214 04:38:24.997685 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:37 crc kubenswrapper[4867]: I0214 04:38:37.981908 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 04:38:37 crc kubenswrapper[4867]: I0214 04:38:37.997430 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:37 crc kubenswrapper[4867]: E0214 04:38:37.997732 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:38 crc kubenswrapper[4867]: I0214 04:38:38.035526 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:42 crc kubenswrapper[4867]: I0214 04:38:42.657463 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" containerID="cri-o://47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" gracePeriod=604796 Feb 14 04:38:47 crc kubenswrapper[4867]: I0214 04:38:47.917832 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471081 4867 generic.go:334] "Generic (PLEG): container finished" podID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerID="47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" exitCode=0 Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0"} Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136"} Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471700 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.538272 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694555 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696156 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.697267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.697693 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.700994 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.701073 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.701093 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.707955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info" (OuterVolumeSpecName: "pod-info") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.712977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.737371 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g" (OuterVolumeSpecName: "kube-api-access-6kp9g") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "kube-api-access-6kp9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.738621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.760092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data" (OuterVolumeSpecName: "config-data") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.761168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196" (OuterVolumeSpecName: "persistence") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "pvc-5e0ed597-0ada-4a46-9560-1f84a6822196". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.802995 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803235 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803246 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803254 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803261 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803468 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") on node \"crc\" " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.814074 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.866018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.886555 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.886788 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5e0ed597-0ada-4a46-9560-1f84a6822196" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196") on node "crc" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906404 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906486 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906551 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 
04:38:50.497281 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.564568 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.572935 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.608073 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: E0214 04:38:50.611536 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.611765 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: E0214 04:38:50.611855 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="setup-container" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.611931 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="setup-container" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.612361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.614145 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.625743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.638547 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.641827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642367 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642838 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745892 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745939 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 
04:38:50.746209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.747745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.748387 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.751248 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.751294 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6ecbc127793ccdba0f55c49c319b455a0b3bdad6043979264d9c6d7f92205d3/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.756301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.756974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.759023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.783037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.786775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.855587 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.953753 4867 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.023331 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" path="/var/lib/kubelet/pods/647ba30a-5526-4e27-9095-680c31ff4eb3/volumes" Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.535359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.998300 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:51 crc kubenswrapper[4867]: E0214 04:38:51.998966 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:52 crc kubenswrapper[4867]: I0214 04:38:52.554120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"d5401a97e1f766e18450e5ec1ee7aadecaede15c285c6fcfb043d1ff4ce891e6"} Feb 14 04:38:54 crc kubenswrapper[4867]: I0214 04:38:54.579480 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c"} Feb 14 04:39:03 crc kubenswrapper[4867]: I0214 04:39:03.020551 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:03 crc kubenswrapper[4867]: E0214 
04:39:03.037026 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:11 crc kubenswrapper[4867]: I0214 04:39:11.956332 4867 scope.go:117] "RemoveContainer" containerID="2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea" Feb 14 04:39:11 crc kubenswrapper[4867]: I0214 04:39:11.988665 4867 scope.go:117] "RemoveContainer" containerID="47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" Feb 14 04:39:17 crc kubenswrapper[4867]: I0214 04:39:17.997208 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:17 crc kubenswrapper[4867]: E0214 04:39:17.998146 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:25 crc kubenswrapper[4867]: I0214 04:39:25.951429 4867 generic.go:334] "Generic (PLEG): container finished" podID="7e279860-a36f-473d-a79a-a34e5820e5a6" containerID="275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c" exitCode=0 Feb 14 04:39:25 crc kubenswrapper[4867]: I0214 04:39:25.951518 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerDied","Data":"275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c"} Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.974876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"6d3ff5bc076eb69718b7185fe7e4458fcc2cf0606b4fdca1d9beacf0ba141acb"} Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.975739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.998612 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.998590406 podStartE2EDuration="36.998590406s" podCreationTimestamp="2026-02-14 04:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:39:26.996113311 +0000 UTC m=+1799.077050625" watchObservedRunningTime="2026-02-14 04:39:26.998590406 +0000 UTC m=+1799.079527720" Feb 14 04:39:32 crc kubenswrapper[4867]: I0214 04:39:32.997416 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:32 crc kubenswrapper[4867]: E0214 04:39:32.998141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:40 crc kubenswrapper[4867]: I0214 04:39:40.958735 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Feb 14 04:39:43 crc kubenswrapper[4867]: I0214 04:39:43.998077 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:43 crc kubenswrapper[4867]: E0214 04:39:43.998708 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.043496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.074686 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.092352 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.104624 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.011572 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" path="/var/lib/kubelet/pods/0fef49b7-7486-40dc-aedc-9814adb071e2/volumes" Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.012961 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" path="/var/lib/kubelet/pods/b72434a2-25c0-4fd4-89cf-eff7bee167c3/volumes" Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.997836 4867 scope.go:117] "RemoveContainer" 
containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:57 crc kubenswrapper[4867]: E0214 04:39:57.998451 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.040143 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.054650 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.074496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.089261 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.100845 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.114086 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.125446 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.136990 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.013116 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d3739-366e-498f-ac28-6df19ee88dee" path="/var/lib/kubelet/pods/853d3739-366e-498f-ac28-6df19ee88dee/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.015565 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" path="/var/lib/kubelet/pods/af1b76a6-cc66-4a23-893d-df38ba5aac38/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.017288 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" path="/var/lib/kubelet/pods/b10f828b-59d6-4eb2-8922-aec92f274280/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.018439 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" path="/var/lib/kubelet/pods/e62c2a1e-55e4-4b7d-90db-ab37eecdb659/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.035466 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.048133 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:40:00 crc kubenswrapper[4867]: I0214 04:40:00.030988 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:40:00 crc kubenswrapper[4867]: I0214 04:40:00.042774 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:40:01 crc kubenswrapper[4867]: I0214 04:40:01.013065 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" path="/var/lib/kubelet/pods/1207dbcf-080a-40c2-a0cb-ab39e7225aaf/volumes" Feb 14 04:40:01 crc kubenswrapper[4867]: I0214 04:40:01.015302 4867 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" path="/var/lib/kubelet/pods/fa8913cb-b163-4973-b6e2-ac741177964e/volumes" Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.040787 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.057351 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.069248 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.078755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:40:10 crc kubenswrapper[4867]: I0214 04:40:10.998311 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:40:10 crc kubenswrapper[4867]: E0214 04:40:10.998903 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:40:11 crc kubenswrapper[4867]: I0214 04:40:11.012845 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" path="/var/lib/kubelet/pods/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a/volumes" Feb 14 04:40:11 crc kubenswrapper[4867]: I0214 04:40:11.014008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" 
path="/var/lib/kubelet/pods/36e07f1b-6481-42a9-a605-b472a8cc3945/volumes"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.109778 4867 scope.go:117] "RemoveContainer" containerID="50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.134195 4867 scope.go:117] "RemoveContainer" containerID="6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.198604 4867 scope.go:117] "RemoveContainer" containerID="41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.260185 4867 scope.go:117] "RemoveContainer" containerID="659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.331339 4867 scope.go:117] "RemoveContainer" containerID="7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.381720 4867 scope.go:117] "RemoveContainer" containerID="f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.435843 4867 scope.go:117] "RemoveContainer" containerID="4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.457097 4867 scope.go:117] "RemoveContainer" containerID="027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.485071 4867 scope.go:117] "RemoveContainer" containerID="ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.524038 4867 scope.go:117] "RemoveContainer" containerID="4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4"
Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.551139 4867 scope.go:117] "RemoveContainer" containerID="63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2"
Feb 14 04:40:23 crc kubenswrapper[4867]: I0214 04:40:23.997354 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:40:23 crc kubenswrapper[4867]: E0214 04:40:23.998380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:40:26 crc kubenswrapper[4867]: I0214 04:40:26.048748 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k62wg"]
Feb 14 04:40:26 crc kubenswrapper[4867]: I0214 04:40:26.062932 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k62wg"]
Feb 14 04:40:27 crc kubenswrapper[4867]: I0214 04:40:27.009823 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" path="/var/lib/kubelet/pods/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb/volumes"
Feb 14 04:40:34 crc kubenswrapper[4867]: I0214 04:40:34.997158 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:40:34 crc kubenswrapper[4867]: E0214 04:40:34.998057 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:40:48 crc kubenswrapper[4867]: I0214 04:40:48.032158 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gzvxs"]
Feb 14 04:40:48 crc kubenswrapper[4867]: I0214 04:40:48.046755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gzvxs"]
Feb 14 04:40:49 crc kubenswrapper[4867]: I0214 04:40:49.010902 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:40:49 crc kubenswrapper[4867]: I0214 04:40:49.011915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" path="/var/lib/kubelet/pods/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2/volumes"
Feb 14 04:40:49 crc kubenswrapper[4867]: E0214 04:40:49.012130 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.051908 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"]
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.067015 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-f62v7"]
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.086876 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9vmb7"]
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.100340 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"]
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.111700 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9vmb7"]
Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.123356 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-f62v7"]
Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.010877 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" path="/var/lib/kubelet/pods/9c993d62-94a7-4903-b984-adcef36b53b8/volumes"
Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.012786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd001336-81f9-43f6-9540-432047e6c98a" path="/var/lib/kubelet/pods/bd001336-81f9-43f6-9540-432047e6c98a/volumes"
Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.014166 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" path="/var/lib/kubelet/pods/f90d34b6-263e-4515-a13a-a41fda1c40ca/volumes"
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.038208 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.056723 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.071935 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.083390 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8zqfs"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.094923 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7kcws"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.105192 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.116899 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7kcws"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.127963 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.138000 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8zqfs"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.148178 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"]
Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.997552 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:40:59 crc kubenswrapper[4867]: E0214 04:40:59.998018 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.011916 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" path="/var/lib/kubelet/pods/2c5e9025-3781-4461-98d7-0d0d72c3b59b/volumes"
Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.012979 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" path="/var/lib/kubelet/pods/6961722f-b14d-42f2-bd56-68686c2e8a9a/volumes"
Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.014638 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" path="/var/lib/kubelet/pods/b1826e5b-3563-455f-9caf-9c4ee203210f/volumes"
Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.016223 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" path="/var/lib/kubelet/pods/c14b9ea2-b4ee-4365-8b77-d58ff122fabb/volumes"
Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.018926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" path="/var/lib/kubelet/pods/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93/volumes"
Feb 14 04:41:04 crc kubenswrapper[4867]: I0214 04:41:04.037583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gk75z"]
Feb 14 04:41:04 crc kubenswrapper[4867]: I0214 04:41:04.049809 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gk75z"]
Feb 14 04:41:05 crc kubenswrapper[4867]: I0214 04:41:05.012170 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" path="/var/lib/kubelet/pods/49af28f1-d33f-4717-81a7-4377bfef388c/volumes"
Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.829627 4867 scope.go:117] "RemoveContainer" containerID="8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0"
Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.855016 4867 scope.go:117] "RemoveContainer" containerID="8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c"
Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.882894 4867 scope.go:117] "RemoveContainer" containerID="cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332"
Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.963571 4867 scope.go:117] "RemoveContainer" containerID="8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.020802 4867 scope.go:117] "RemoveContainer" containerID="645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.074856 4867 scope.go:117] "RemoveContainer" containerID="abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.101617 4867 scope.go:117] "RemoveContainer" containerID="4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.159930 4867 scope.go:117] "RemoveContainer" containerID="b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.223035 4867 scope.go:117] "RemoveContainer" containerID="e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.255561 4867 scope.go:117] "RemoveContainer" containerID="2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.280889 4867 scope.go:117] "RemoveContainer" containerID="cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.301147 4867 scope.go:117] "RemoveContainer" containerID="da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.319316 4867 scope.go:117] "RemoveContainer" containerID="bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.340016 4867 scope.go:117] "RemoveContainer" containerID="dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.358923 4867 scope.go:117] "RemoveContainer" containerID="d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786"
Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.997074 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:41:14 crc kubenswrapper[4867]: I0214 04:41:14.266696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"}
Feb 14 04:41:24 crc kubenswrapper[4867]: I0214 04:41:24.385966 4867 generic.go:334] "Generic (PLEG): container finished" podID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerID="092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266" exitCode=0
Feb 14 04:41:24 crc kubenswrapper[4867]: I0214 04:41:24.386102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerDied","Data":"092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266"}
Feb 14 04:41:25 crc kubenswrapper[4867]: I0214 04:41:25.903744 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000427 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") "
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") "
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") "
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000733 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") "
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.008315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh" (OuterVolumeSpecName: "kube-api-access-n99rh") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "kube-api-access-n99rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.009163 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.043564 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.047367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory" (OuterVolumeSpecName: "inventory") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109636 4867 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109688 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109699 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") on node \"crc\" DevicePath \"\""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109710 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.409955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerDied","Data":"29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585"}
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.410527 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.410091 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.512391 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"]
Feb 14 04:41:26 crc kubenswrapper[4867]: E0214 04:41:26.513594 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.513721 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.514119 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.515200 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.517646 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.517938 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.518290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.518613 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.527268 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"]
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.629088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.629235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.630024 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.732594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.732705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.732834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.739669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.739973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.754016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.906912 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:41:27 crc kubenswrapper[4867]: I0214 04:41:27.955423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"]
Feb 14 04:41:27 crc kubenswrapper[4867]: I0214 04:41:27.962119 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 04:41:28 crc kubenswrapper[4867]: I0214 04:41:28.443261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerStarted","Data":"0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9"}
Feb 14 04:41:29 crc kubenswrapper[4867]: I0214 04:41:29.457132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerStarted","Data":"88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0"}
Feb 14 04:41:29 crc kubenswrapper[4867]: I0214 04:41:29.554962 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" podStartSLOduration=3.159906951 podStartE2EDuration="3.55493972s" podCreationTimestamp="2026-02-14 04:41:26 +0000 UTC" firstStartedPulling="2026-02-14 04:41:27.961880355 +0000 UTC m=+1920.042817669" lastFinishedPulling="2026-02-14 04:41:28.356913124 +0000 UTC m=+1920.437850438" observedRunningTime="2026-02-14 04:41:29.544576238 +0000 UTC m=+1921.625513562" watchObservedRunningTime="2026-02-14 04:41:29.55493972 +0000 UTC m=+1921.635877034"
Feb 14 04:41:36 crc kubenswrapper[4867]: I0214 04:41:36.049296 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-425tq"]
Feb 14 04:41:36 crc kubenswrapper[4867]: I0214 04:41:36.068369 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-425tq"]
Feb 14 04:41:37 crc kubenswrapper[4867]: I0214 04:41:37.014953 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" path="/var/lib/kubelet/pods/ed6edd10-56a9-4431-bb38-7b266f802e63/volumes"
Feb 14 04:41:47 crc kubenswrapper[4867]: I0214 04:41:47.074725 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9zrmj"]
Feb 14 04:41:47 crc kubenswrapper[4867]: I0214 04:41:47.086715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9zrmj"]
Feb 14 04:41:48 crc kubenswrapper[4867]: I0214 04:41:48.043812 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"]
Feb 14 04:41:48 crc kubenswrapper[4867]: I0214 04:41:48.064032 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"]
Feb 14 04:41:49 crc kubenswrapper[4867]: I0214 04:41:49.020225 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87589008-b930-4698-b94b-883c707d5fb1" path="/var/lib/kubelet/pods/87589008-b930-4698-b94b-883c707d5fb1/volumes"
Feb 14 04:41:49 crc kubenswrapper[4867]: I0214 04:41:49.021273 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" path="/var/lib/kubelet/pods/ffefbab2-8288-4eaa-9df3-e95383cdf19d/volumes"
Feb 14 04:41:58 crc kubenswrapper[4867]: I0214 04:41:58.050427 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mklx7"]
Feb 14 04:41:58 crc kubenswrapper[4867]: I0214 04:41:58.066816 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mklx7"]
Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.010881 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" path="/var/lib/kubelet/pods/cccb73cc-2b89-4363-b7ca-44dfa627d9f9/volumes"
Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.039409 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-grkqh"]
Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.052284 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-grkqh"]
Feb 14 04:42:01 crc kubenswrapper[4867]: I0214 04:42:01.011203 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" path="/var/lib/kubelet/pods/9c973bde-ff14-4cce-9f9c-57354dbd4adb/volumes"
Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.691040 4867 scope.go:117] "RemoveContainer" containerID="933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38"
Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.733077 4867 scope.go:117] "RemoveContainer" containerID="b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468"
Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.795599 4867 scope.go:117] "RemoveContainer" containerID="cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da"
Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.864040 4867 scope.go:117] "RemoveContainer" containerID="f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635"
Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.924274 4867 scope.go:117] "RemoveContainer" containerID="42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201"
Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.045534 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"]
Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.083558 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5ffts"]
Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.098344 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"]
Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.111335 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5ffts"]
Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.010212 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" path="/var/lib/kubelet/pods/289f81c2-9092-4a51-a1b4-8eedaa09aedb/volumes"
Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.011116 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7729cf-7332-4432-999f-fbee997b2201" path="/var/lib/kubelet/pods/2b7729cf-7332-4432-999f-fbee997b2201/volumes"
Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.031080 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"]
Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.042021 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.038987 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.062204 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.080105 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.090432 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.101169 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"]
Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.112959 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"]
Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.012539 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" path="/var/lib/kubelet/pods/041c55d6-87c7-47b4-a53b-9b38cb85e3d2/volumes"
Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.014628 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" path="/var/lib/kubelet/pods/708fbc3f-a05a-4b29-b455-32db117495d1/volumes"
Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.015948 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" path="/var/lib/kubelet/pods/730dbd9b-ddff-4d09-89ff-b9135ed83042/volumes"
Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.019768 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" path="/var/lib/kubelet/pods/80c71d92-a9d1-4256-b7be-678dc34d1562/volumes"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.103100 4867 scope.go:117] "RemoveContainer" containerID="d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.232097 4867 scope.go:117] "RemoveContainer" containerID="ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.259400 4867 scope.go:117] "RemoveContainer" containerID="3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.311590 4867 scope.go:117] "RemoveContainer" containerID="0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.369416 4867 scope.go:117] "RemoveContainer" containerID="edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a"
Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.436090 4867 scope.go:117] "RemoveContainer" containerID="6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c"
Feb 14 04:43:18 crc kubenswrapper[4867]: I0214 04:43:18.686936 4867 generic.go:334] "Generic (PLEG): container finished" podID="879dee23-804e-4b8a-ac20-0546383202b0" containerID="88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0" exitCode=0
Feb 14 04:43:18 crc kubenswrapper[4867]: I0214 04:43:18.687034 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerDied","Data":"88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0"}
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.249052 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.311717 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") "
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.311836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") "
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.312192 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") "
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.322888 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d" (OuterVolumeSpecName: "kube-api-access-89f9d") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "kube-api-access-89f9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.356329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.365702 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory" (OuterVolumeSpecName: "inventory") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416271 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416303 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416316 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.708644 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerDied","Data":"0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9"} Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.709055 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.708693 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.807530 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:20 crc kubenswrapper[4867]: E0214 04:43:20.808202 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.808229 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.808558 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.809821 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.813787 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.813840 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.814132 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.814395 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.838549 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.927485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.927596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc 
kubenswrapper[4867]: I0214 04:43:20.928102 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.035493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.046897 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.048364 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.135173 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.679681 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.725397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerStarted","Data":"208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa"} Feb 14 04:43:22 crc kubenswrapper[4867]: I0214 04:43:22.737742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerStarted","Data":"850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6"} Feb 14 04:43:22 crc kubenswrapper[4867]: I0214 04:43:22.761346 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" podStartSLOduration=2.329171393 podStartE2EDuration="2.761214264s" podCreationTimestamp="2026-02-14 04:43:20 +0000 UTC" firstStartedPulling="2026-02-14 04:43:21.684170466 +0000 UTC m=+2033.765107780" lastFinishedPulling="2026-02-14 04:43:22.116213337 +0000 UTC m=+2034.197150651" observedRunningTime="2026-02-14 04:43:22.754432295 +0000 UTC m=+2034.835369609" watchObservedRunningTime="2026-02-14 04:43:22.761214264 +0000 UTC m=+2034.842151598" Feb 14 04:43:31 crc kubenswrapper[4867]: I0214 04:43:31.251332 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 14 04:43:31 crc kubenswrapper[4867]: I0214 04:43:31.252096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:43:43 crc kubenswrapper[4867]: I0214 04:43:43.060224 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:43:43 crc kubenswrapper[4867]: I0214 04:43:43.075961 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:43:45 crc kubenswrapper[4867]: I0214 04:43:45.013520 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" path="/var/lib/kubelet/pods/cd08e0e3-a41f-4b25-b71a-1c968410d52e/volumes" Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.050668 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.068817 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.081572 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.091810 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:43:51 crc kubenswrapper[4867]: I0214 04:43:51.014871 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486bfb80-5589-4e9e-84d3-10726a066702" path="/var/lib/kubelet/pods/486bfb80-5589-4e9e-84d3-10726a066702/volumes" Feb 14 04:43:51 crc kubenswrapper[4867]: I0214 04:43:51.016972 4867 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" path="/var/lib/kubelet/pods/4aa569b6-1ec2-48e8-99c2-f165e5ea9604/volumes" Feb 14 04:44:01 crc kubenswrapper[4867]: I0214 04:44:01.250902 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:44:01 crc kubenswrapper[4867]: I0214 04:44:01.251525 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:44:09 crc kubenswrapper[4867]: I0214 04:44:09.093575 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:44:09 crc kubenswrapper[4867]: I0214 04:44:09.124715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.022010 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" path="/var/lib/kubelet/pods/9947f337-0734-4b4e-bc31-e68e6354ed74/volumes" Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.033745 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"] Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.043249 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"] Feb 14 04:44:13 crc kubenswrapper[4867]: I0214 04:44:13.020366 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" 
path="/var/lib/kubelet/pods/2bbf3a42-f012-4bed-a60e-1defcd0b1af9/volumes" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.595552 4867 scope.go:117] "RemoveContainer" containerID="9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.636633 4867 scope.go:117] "RemoveContainer" containerID="25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.703108 4867 scope.go:117] "RemoveContainer" containerID="f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.793744 4867 scope.go:117] "RemoveContainer" containerID="0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.872748 4867 scope.go:117] "RemoveContainer" containerID="4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.251338 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.251957 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.252020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.253157 4867 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.253232 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65" gracePeriod=600 Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.589179 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.593243 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.602989 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638556 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642696 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65" exitCode=0 Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"} Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642766 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.742324 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.766225 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.926620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.602057 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.670433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"429bdccd454e07224012faaaa97764f590a609292e1cea0ebe0e35d368f7b141"} Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.674228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"} Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.678389 4867 generic.go:334] "Generic (PLEG): container finished" podID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerID="850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6" exitCode=0 Feb 
14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.678449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerDied","Data":"850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6"} Feb 14 04:44:33 crc kubenswrapper[4867]: I0214 04:44:33.692591 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" exitCode=0 Feb 14 04:44:33 crc kubenswrapper[4867]: I0214 04:44:33.692670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.187740 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.306291 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.306786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.307027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.313037 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s" (OuterVolumeSpecName: "kube-api-access-9mc6s") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). InnerVolumeSpecName "kube-api-access-9mc6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.342213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory" (OuterVolumeSpecName: "inventory") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.350855 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410264 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410301 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410312 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.705350 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" 
event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerDied","Data":"208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708576 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708587 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.882081 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:34 crc kubenswrapper[4867]: E0214 04:44:34.882817 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.882867 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.883220 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.884388 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890313 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890620 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890814 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890975 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.933140 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.025739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.025958 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: 
I0214 04:44:35.026213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.133165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.133562 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.145904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.178762 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.182019 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.189993 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.233224 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333144 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333493 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333669 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod 
\"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.440765 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.441153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.470091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.677304 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.874906 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.252806 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:36 crc kubenswrapper[4867]: W0214 04:44:36.257313 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7288f7d_b1ef_4c2e_afab_abf0640eca5b.slice/crio-0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa WatchSource:0}: Error finding container 0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa: Status 404 returned error can't find the container with id 0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.737447 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" exitCode=0 Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.737865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.741766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerStarted","Data":"608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.747860 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.747908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.778652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.781991 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerStarted","Data":"3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.784973 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" exitCode=0 Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.785007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.816249 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r75vv" 
podStartSLOduration=3.223748605 podStartE2EDuration="6.816223212s" podCreationTimestamp="2026-02-14 04:44:31 +0000 UTC" firstStartedPulling="2026-02-14 04:44:33.696158823 +0000 UTC m=+2105.777096147" lastFinishedPulling="2026-02-14 04:44:37.28863344 +0000 UTC m=+2109.369570754" observedRunningTime="2026-02-14 04:44:37.799092282 +0000 UTC m=+2109.880029616" watchObservedRunningTime="2026-02-14 04:44:37.816223212 +0000 UTC m=+2109.897160526" Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.855310 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" podStartSLOduration=3.362497501 podStartE2EDuration="3.855285468s" podCreationTimestamp="2026-02-14 04:44:34 +0000 UTC" firstStartedPulling="2026-02-14 04:44:35.922674221 +0000 UTC m=+2108.003611535" lastFinishedPulling="2026-02-14 04:44:36.415462188 +0000 UTC m=+2108.496399502" observedRunningTime="2026-02-14 04:44:37.837376507 +0000 UTC m=+2109.918313821" watchObservedRunningTime="2026-02-14 04:44:37.855285468 +0000 UTC m=+2109.936222782" Feb 14 04:44:38 crc kubenswrapper[4867]: I0214 04:44:38.797625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.927067 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.927751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.994823 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r75vv" Feb 14 
04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.841682 4867 generic.go:334] "Generic (PLEG): container finished" podID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerID="3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469" exitCode=0 Feb 14 04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.841766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerDied","Data":"3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469"} Feb 14 04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.906805 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.770282 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.853658 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" exitCode=0 Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.853725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.606058 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.785676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.785907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.786046 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.794893 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l" (OuterVolumeSpecName: "kube-api-access-mpk7l") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "kube-api-access-mpk7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.829350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.832549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory" (OuterVolumeSpecName: "inventory") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.866244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870188 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerDied","Data":"608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870342 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870256 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870243 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r75vv" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" containerID="cri-o://666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" gracePeriod=2 Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889036 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889087 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889101 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.894622 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tmrm" podStartSLOduration=3.415196339 podStartE2EDuration="9.894601515s" podCreationTimestamp="2026-02-14 04:44:35 +0000 UTC" firstStartedPulling="2026-02-14 04:44:37.788452312 +0000 UTC m=+2109.869389626" lastFinishedPulling="2026-02-14 04:44:44.267857488 +0000 UTC m=+2116.348794802" observedRunningTime="2026-02-14 04:44:44.891861053 +0000 UTC m=+2116.972798367" watchObservedRunningTime="2026-02-14 04:44:44.894601515 +0000 UTC m=+2116.975538829" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.971172 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:44 crc kubenswrapper[4867]: E0214 04:44:44.992030 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.992076 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.993128 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:44.999985 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.005233 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.009969 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.012400 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.012586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.014523 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.097579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.098077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.098282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.201679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.201889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.202003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.209879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.211143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.234044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.280227 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.338852 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405363 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.409055 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities" (OuterVolumeSpecName: "utilities") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.410033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp" (OuterVolumeSpecName: "kube-api-access-xp7hp") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "kube-api-access-xp7hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.458421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508057 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508380 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508393 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.677781 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.677990 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.883477 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" exitCode=0 Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 
04:44:45.884609 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"429bdccd454e07224012faaaa97764f590a609292e1cea0ebe0e35d368f7b141"} Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886960 4867 scope.go:117] "RemoveContainer" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.923300 4867 scope.go:117] "RemoveContainer" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.930864 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.948694 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.965158 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:45 crc kubenswrapper[4867]: W0214 04:44:45.966172 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b6f69a7_8ea6_48ad_aa0c_bd11b1efef10.slice/crio-0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac 
WatchSource:0}: Error finding container 0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac: Status 404 returned error can't find the container with id 0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.972775 4867 scope.go:117] "RemoveContainer" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.083721 4867 scope.go:117] "RemoveContainer" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.084284 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": container with ID starting with 666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392 not found: ID does not exist" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084318 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} err="failed to get container status \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": rpc error: code = NotFound desc = could not find container \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": container with ID starting with 666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392 not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084338 4867 scope.go:117] "RemoveContainer" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.084769 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": container with ID starting with 10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e not found: ID does not exist" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084789 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} err="failed to get container status \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": rpc error: code = NotFound desc = could not find container \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": container with ID starting with 10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084802 4867 scope.go:117] "RemoveContainer" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.085104 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": container with ID starting with 4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449 not found: ID does not exist" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.085122 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449"} err="failed to get container status \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": rpc error: code = NotFound desc = could not find container \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": 
container with ID starting with 4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449 not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.735481 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" probeResult="failure" output=< Feb 14 04:44:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:44:46 crc kubenswrapper[4867]: > Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.907490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerStarted","Data":"4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a"} Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.907574 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerStarted","Data":"0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac"} Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.936973 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" podStartSLOduration=2.5361594739999997 podStartE2EDuration="2.936951694s" podCreationTimestamp="2026-02-14 04:44:44 +0000 UTC" firstStartedPulling="2026-02-14 04:44:45.97346592 +0000 UTC m=+2118.054403234" lastFinishedPulling="2026-02-14 04:44:46.37425814 +0000 UTC m=+2118.455195454" observedRunningTime="2026-02-14 04:44:46.928236205 +0000 UTC m=+2119.009173519" watchObservedRunningTime="2026-02-14 04:44:46.936951694 +0000 UTC m=+2119.017889008" Feb 14 04:44:47 crc kubenswrapper[4867]: I0214 04:44:47.014300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" path="/var/lib/kubelet/pods/b5adcee9-1419-4c20-b96e-4886a1f19c68/volumes" Feb 14 04:44:56 crc kubenswrapper[4867]: I0214 04:44:56.729328 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" probeResult="failure" output=< Feb 14 04:44:56 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:44:56 crc kubenswrapper[4867]: > Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.047032 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.063954 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150247 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150824 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150843 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-content" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150858 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-content" Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150916 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-utilities" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150923 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-utilities" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.151154 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.152121 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.156028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.161228 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.162809 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.168766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.169005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: 
\"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.169108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc 
kubenswrapper[4867]: I0214 04:45:00.272628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.277461 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.289479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.483758 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.964955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:01 crc kubenswrapper[4867]: I0214 04:45:01.036318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" path="/var/lib/kubelet/pods/4be79f3c-fa78-40d2-9ad9-d1dfd965c831/volumes" Feb 14 04:45:01 crc kubenswrapper[4867]: I0214 04:45:01.089910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerStarted","Data":"d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f"} Feb 14 04:45:02 crc kubenswrapper[4867]: I0214 04:45:02.124498 4867 generic.go:334] "Generic (PLEG): container finished" podID="c9309a87-899d-49c2-885b-9d5689c3086b" containerID="ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a" exitCode=0 Feb 14 04:45:02 crc kubenswrapper[4867]: I0214 04:45:02.124633 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerDied","Data":"ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a"} Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.661670 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665487 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.667057 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.671556 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.679192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9" (OuterVolumeSpecName: "kube-api-access-jnxl9") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "kube-api-access-jnxl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769597 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769629 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769638 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerDied","Data":"d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f"} Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149773 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149847 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.738082 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.753115 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.013887 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" path="/var/lib/kubelet/pods/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a/volumes" Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.729109 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.781843 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:06 crc kubenswrapper[4867]: I0214 04:45:06.382073 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.192781 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" containerID="cri-o://8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" gracePeriod=2 Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.660844 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.782845 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.782927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.783093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.783848 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities" (OuterVolumeSpecName: "utilities") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.784705 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.794283 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2" (OuterVolumeSpecName: "kube-api-access-gpdx2") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "kube-api-access-gpdx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.886390 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.926170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.989635 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.205323 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" exitCode=0 Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa"} Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206459 4867 scope.go:117] "RemoveContainer" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206738 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.257067 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.259101 4867 scope.go:117] "RemoveContainer" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.272243 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.282789 4867 scope.go:117] "RemoveContainer" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331065 4867 scope.go:117] "RemoveContainer" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.331711 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": container with ID starting with 8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9 not found: ID does not exist" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331778 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} err="failed to get container status \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": rpc error: code = NotFound desc = could not find container \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": container with ID starting with 8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9 not found: ID does 
not exist" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331810 4867 scope.go:117] "RemoveContainer" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.332248 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": container with ID starting with 36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8 not found: ID does not exist" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332335 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} err="failed to get container status \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": rpc error: code = NotFound desc = could not find container \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": container with ID starting with 36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8 not found: ID does not exist" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332411 4867 scope.go:117] "RemoveContainer" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.332747 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": container with ID starting with b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae not found: ID does not exist" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332773 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} err="failed to get container status \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": rpc error: code = NotFound desc = could not find container \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": container with ID starting with b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae not found: ID does not exist" Feb 14 04:45:09 crc kubenswrapper[4867]: I0214 04:45:09.011094 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" path="/var/lib/kubelet/pods/f7288f7d-b1ef-4c2e-afab-abf0640eca5b/volumes" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.993879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995137 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-utilities" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995160 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-utilities" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995187 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995223 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995233 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995269 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-content" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995277 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-content" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995541 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995579 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.998328 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.016626 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.110985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.111280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.111454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214311 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.215154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.238543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.322844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.878048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.278922 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" containerID="f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6" exitCode=0 Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.279220 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6"} Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.279249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"7dba7fa7daa95862a94787ddc52839f0db353369b79c7f54925323d822855af2"} Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.113046 4867 scope.go:117] "RemoveContainer" containerID="8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15" Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.140706 4867 scope.go:117] "RemoveContainer" containerID="aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6" Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.294147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49"} Feb 14 04:45:16 crc kubenswrapper[4867]: I0214 04:45:16.307023 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" 
containerID="e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49" exitCode=0 Feb 14 04:45:16 crc kubenswrapper[4867]: I0214 04:45:16.307199 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49"} Feb 14 04:45:17 crc kubenswrapper[4867]: I0214 04:45:17.321731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb"} Feb 14 04:45:17 crc kubenswrapper[4867]: I0214 04:45:17.357996 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rkbk8" podStartSLOduration=2.8607585110000002 podStartE2EDuration="5.357974079s" podCreationTimestamp="2026-02-14 04:45:12 +0000 UTC" firstStartedPulling="2026-02-14 04:45:14.28095991 +0000 UTC m=+2146.361897224" lastFinishedPulling="2026-02-14 04:45:16.778175478 +0000 UTC m=+2148.859112792" observedRunningTime="2026-02-14 04:45:17.349468555 +0000 UTC m=+2149.430405899" watchObservedRunningTime="2026-02-14 04:45:17.357974079 +0000 UTC m=+2149.438911393" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.323636 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.324300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.377783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 
04:45:23.385807 4867 generic.go:334] "Generic (PLEG): container finished" podID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerID="4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a" exitCode=0 Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.387053 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerDied","Data":"4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a"} Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.439741 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.617378 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:24 crc kubenswrapper[4867]: I0214 04:45:24.861415 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041623 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041775 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.048288 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2" (OuterVolumeSpecName: "kube-api-access-nccx2") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). InnerVolumeSpecName "kube-api-access-nccx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.078804 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory" (OuterVolumeSpecName: "inventory") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.083615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145925 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145964 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145980 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.409953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerDied","Data":"0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac"} Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.409979 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.410009 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.410105 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rkbk8" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server" containerID="cri-o://60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" gracePeriod=2 Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.512329 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:25 crc kubenswrapper[4867]: E0214 04:45:25.513103 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.513164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.513472 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.514604 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.549901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.550490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.552367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.553361 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.553798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.662844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.662938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.663404 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.769701 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.772229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.785643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.969773 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.427025 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" containerID="60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" exitCode=0 Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.427179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb"} Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.524106 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.538895 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690334 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.691335 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities" (OuterVolumeSpecName: "utilities") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.695686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6" (OuterVolumeSpecName: "kube-api-access-xx5v6") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "kube-api-access-xx5v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.720171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795017 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795576 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795665 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.440980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" 
event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerStarted","Data":"125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.441624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerStarted","Data":"750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.446862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"7dba7fa7daa95862a94787ddc52839f0db353369b79c7f54925323d822855af2"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.446917 4867 scope.go:117] "RemoveContainer" containerID="60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.447052 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.471255 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" podStartSLOduration=2.097174937 podStartE2EDuration="2.471227112s" podCreationTimestamp="2026-02-14 04:45:25 +0000 UTC" firstStartedPulling="2026-02-14 04:45:26.539943341 +0000 UTC m=+2158.620880655" lastFinishedPulling="2026-02-14 04:45:26.913995516 +0000 UTC m=+2158.994932830" observedRunningTime="2026-02-14 04:45:27.465968084 +0000 UTC m=+2159.546905408" watchObservedRunningTime="2026-02-14 04:45:27.471227112 +0000 UTC m=+2159.552164436" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.489768 4867 scope.go:117] "RemoveContainer" containerID="e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.508558 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.512180 4867 scope.go:117] "RemoveContainer" containerID="f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.525475 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:29 crc kubenswrapper[4867]: I0214 04:45:29.015900 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" path="/var/lib/kubelet/pods/95abd277-f40d-4636-8270-ff2346c0c30e/volumes" Feb 14 04:46:12 crc kubenswrapper[4867]: I0214 04:46:12.913409 4867 generic.go:334] "Generic (PLEG): container finished" podID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerID="125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c" exitCode=0 Feb 14 04:46:12 crc kubenswrapper[4867]: I0214 
04:46:12.913475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerDied","Data":"125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c"}
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.426005 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533142 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") "
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533556 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") "
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") "
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.539790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds" (OuterVolumeSpecName: "kube-api-access-z6nds") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "kube-api-access-z6nds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.571086 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory" (OuterVolumeSpecName: "inventory") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.572329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637693 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637733 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637744 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.935780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerDied","Data":"750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759"}
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.936271 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759"
Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.935854 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.050741 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"]
Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051369 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051393 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server"
Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051421 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-utilities"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-utilities"
Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051502 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051615 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051638 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-content"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051647 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-content"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051925 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051952 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.053038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.061330 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"]
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.061997 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062126 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062575 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062770 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.152491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.152780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.153967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.263139 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.268911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.280774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.388267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.986116 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"]
Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.965879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerStarted","Data":"15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea"}
Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.966495 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerStarted","Data":"eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5"}
Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.990778 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" podStartSLOduration=1.5338248540000001 podStartE2EDuration="1.990752341s" podCreationTimestamp="2026-02-14 04:46:15 +0000 UTC" firstStartedPulling="2026-02-14 04:46:15.993449511 +0000 UTC m=+2208.074386825" lastFinishedPulling="2026-02-14 04:46:16.450376998 +0000 UTC m=+2208.531314312" observedRunningTime="2026-02-14 04:46:16.983793928 +0000 UTC m=+2209.064731262" watchObservedRunningTime="2026-02-14 04:46:16.990752341 +0000 UTC m=+2209.071689655"
Feb 14 04:46:24 crc kubenswrapper[4867]: I0214 04:46:24.048693 4867 generic.go:334] "Generic (PLEG): container finished" podID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerID="15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea" exitCode=0
Feb 14 04:46:24 crc kubenswrapper[4867]: I0214 04:46:24.048772 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerDied","Data":"15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea"}
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.620727 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666008 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") "
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666465 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") "
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") "
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.672158 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66" (OuterVolumeSpecName: "kube-api-access-czk66") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "kube-api-access-czk66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.706645 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.712446 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769831 4867 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769886 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769902 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069589 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerDied","Data":"eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5"}
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069631 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069653 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.151598 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"]
Feb 14 04:46:26 crc kubenswrapper[4867]: E0214 04:46:26.152059 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.152078 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.152297 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.153177 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.164171 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"]
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192409 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192617 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192830 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195899 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.302046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.306195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.313724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.510144 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:27 crc kubenswrapper[4867]: I0214 04:46:27.071408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"]
Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.105703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerStarted","Data":"ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e"}
Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.106244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerStarted","Data":"3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da"}
Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.137076 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" podStartSLOduration=1.561428933 podStartE2EDuration="2.137046704s" podCreationTimestamp="2026-02-14 04:46:26 +0000 UTC" firstStartedPulling="2026-02-14 04:46:27.075146564 +0000 UTC m=+2219.156083878" lastFinishedPulling="2026-02-14 04:46:27.650764335 +0000 UTC m=+2219.731701649" observedRunningTime="2026-02-14 04:46:28.124981197 +0000 UTC m=+2220.205918511" watchObservedRunningTime="2026-02-14 04:46:28.137046704 +0000 UTC m=+2220.217984018"
Feb 14 04:46:31 crc kubenswrapper[4867]: I0214 04:46:31.251544 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:46:31 crc kubenswrapper[4867]: I0214 04:46:31.252118 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:46:35 crc kubenswrapper[4867]: I0214 04:46:35.403362 4867 generic.go:334] "Generic (PLEG): container finished" podID="764366f2-ea14-4cc9-a195-52ee347e666d" containerID="ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e" exitCode=0
Feb 14 04:46:35 crc kubenswrapper[4867]: I0214 04:46:35.403455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerDied","Data":"ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e"}
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.026177 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124531 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") "
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") "
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") "
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.130530 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw" (OuterVolumeSpecName: "kube-api-access-sz2xw") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "kube-api-access-sz2xw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.160271 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory" (OuterVolumeSpecName: "inventory") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.160575 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228397 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228455 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228475 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") on node \"crc\" DevicePath \"\""
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerDied","Data":"3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da"}
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426520 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426525 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.520752 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"]
Feb 14 04:46:37 crc kubenswrapper[4867]: E0214 04:46:37.521720 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.521763 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.522042 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.523169 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.525790 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.527222 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.527622 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.529455 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.538383 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"]
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.637737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.638476 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.638621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.747055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.747647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.759097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.849912 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"
Feb 14 04:46:38 crc kubenswrapper[4867]: I0214 04:46:38.504456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"]
Feb 14 04:46:38 crc kubenswrapper[4867]: I0214 04:46:38.517230 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.445406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerStarted","Data":"7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6"}
Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.446027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerStarted","Data":"ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410"}
Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.466313 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" podStartSLOduration=2.085456559 podStartE2EDuration="2.466291203s" podCreationTimestamp="2026-02-14 04:46:37 +0000 UTC" firstStartedPulling="2026-02-14 04:46:38.516857633 +0000 UTC m=+2230.597794967" lastFinishedPulling="2026-02-14 04:46:38.897692297 +0000 UTC m=+2230.978629611" observedRunningTime="2026-02-14 04:46:39.459119714 +0000 UTC m=+2231.540057028" watchObservedRunningTime="2026-02-14 04:46:39.466291203 +0000 UTC m=+2231.547228517"
Feb 14 04:46:48 crc kubenswrapper[4867]: I0214 04:46:48.555854 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a0a98e3-261b-460d-92c2-4fce312f5171"
containerID="7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6" exitCode=0 Feb 14 04:46:48 crc kubenswrapper[4867]: I0214 04:46:48.556145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerDied","Data":"7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6"} Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.102311 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269293 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.280468 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp" (OuterVolumeSpecName: "kube-api-access-btqxp") pod 
"4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "kube-api-access-btqxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.303853 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory" (OuterVolumeSpecName: "inventory") pod "4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.326526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372411 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372446 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372458 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerDied","Data":"ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410"} Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602351 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602413 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.726821 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"] Feb 14 04:46:50 crc kubenswrapper[4867]: E0214 04:46:50.727355 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.727377 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.727636 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.728574 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.732624 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735352 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735707 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735750 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735978 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736153 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736593 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.761137 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"] Feb 14 04:46:50 crc 
kubenswrapper[4867]: I0214 04:46:50.883710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883930 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc 
kubenswrapper[4867]: I0214 04:46:50.883998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884071 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 
04:46:50.884265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884449 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.885270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" 
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987529 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987601 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987654 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.992430 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994009 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994148 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.995026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.995568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.996711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.996887 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000728 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.007299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: 
\"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.012363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.016651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.016652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.057363 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.658959 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"] Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.628044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerStarted","Data":"eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0"} Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.628888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerStarted","Data":"711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb"} Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.659165 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" podStartSLOduration=2.275855894 podStartE2EDuration="2.659139622s" podCreationTimestamp="2026-02-14 04:46:50 +0000 UTC" firstStartedPulling="2026-02-14 04:46:51.662562242 +0000 UTC m=+2243.743499556" lastFinishedPulling="2026-02-14 04:46:52.04584597 +0000 UTC m=+2244.126783284" observedRunningTime="2026-02-14 04:46:52.651923302 +0000 UTC m=+2244.732860616" watchObservedRunningTime="2026-02-14 04:46:52.659139622 +0000 UTC m=+2244.740076936" Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.047471 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.056691 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.727330 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.730939 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.741042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod 
\"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.006821 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.007154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.013646 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" path="/var/lib/kubelet/pods/632c48c8-f0d5-4dc9-823e-fa96b9265e97/volumes" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 
04:47:01.034080 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.082745 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.251908 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.252266 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.783873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.832311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"79d7515378363ed08b88377b44a53b803ef90ae278e3a0dcb05f423c876bc5f3"} Feb 14 04:47:02 crc kubenswrapper[4867]: I0214 04:47:02.899681 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" 
containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416" exitCode=0 Feb 14 04:47:02 crc kubenswrapper[4867]: I0214 04:47:02.900195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"} Feb 14 04:47:04 crc kubenswrapper[4867]: I0214 04:47:04.931080 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"} Feb 14 04:47:06 crc kubenswrapper[4867]: I0214 04:47:06.955868 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b" exitCode=0 Feb 14 04:47:06 crc kubenswrapper[4867]: I0214 04:47:06.955957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"} Feb 14 04:47:07 crc kubenswrapper[4867]: I0214 04:47:07.967134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"} Feb 14 04:47:07 crc kubenswrapper[4867]: I0214 04:47:07.992393 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hthk2" podStartSLOduration=3.50929959 podStartE2EDuration="7.992375246s" podCreationTimestamp="2026-02-14 04:47:00 +0000 UTC" firstStartedPulling="2026-02-14 04:47:02.915980194 
+0000 UTC m=+2254.996917498" lastFinishedPulling="2026-02-14 04:47:07.39905584 +0000 UTC m=+2259.479993154" observedRunningTime="2026-02-14 04:47:07.987446267 +0000 UTC m=+2260.068383591" watchObservedRunningTime="2026-02-14 04:47:07.992375246 +0000 UTC m=+2260.073312550" Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.083238 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.083948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.164597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:12 crc kubenswrapper[4867]: I0214 04:47:12.054559 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:12 crc kubenswrapper[4867]: I0214 04:47:12.140369 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.025014 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hthk2" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server" containerID="cri-o://a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" gracePeriod=2 Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.551756 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.688687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.688862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities" (OuterVolumeSpecName: "utilities") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689954 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.704342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q" (OuterVolumeSpecName: "kube-api-access-9hv7q") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "kube-api-access-9hv7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.750790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.792891 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.792927 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039471 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" exitCode=0 Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"} Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039566 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039591 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"79d7515378363ed08b88377b44a53b803ef90ae278e3a0dcb05f423c876bc5f3"} Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039623 4867 scope.go:117] "RemoveContainer" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.074992 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.075474 4867 scope.go:117] "RemoveContainer" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.087293 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"] Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.104663 4867 scope.go:117] "RemoveContainer" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.165892 4867 scope.go:117] "RemoveContainer" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 04:47:15.166378 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": container with ID starting with a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9 not found: ID does not exist" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.166432 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"} err="failed to get container status \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": rpc error: code = NotFound desc = could not find container \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": container with ID starting with a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9 not found: ID does not exist" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.166461 4867 scope.go:117] "RemoveContainer" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b" Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 04:47:15.166958 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": container with ID starting with 5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b not found: ID does not exist" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167005 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"} err="failed to get container status \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": rpc error: code = NotFound desc = could not find container \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": container with ID starting with 5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b not found: ID does not exist" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167034 4867 scope.go:117] "RemoveContainer" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416" Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 
04:47:15.167459 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": container with ID starting with c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416 not found: ID does not exist" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167481 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"} err="failed to get container status \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": rpc error: code = NotFound desc = could not find container \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": container with ID starting with c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416 not found: ID does not exist" Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.352296 4867 scope.go:117] "RemoveContainer" containerID="de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2" Feb 14 04:47:17 crc kubenswrapper[4867]: I0214 04:47:17.009861 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709ab839-d449-4265-b59d-192b93a2039a" path="/var/lib/kubelet/pods/709ab839-d449-4265-b59d-192b93a2039a/volumes" Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251169 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251682 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251731 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.254530 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.254609 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" gracePeriod=600 Feb 14 04:47:31 crc kubenswrapper[4867]: E0214 04:47:31.378867 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229441 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" exitCode=0 Feb 14 
04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229542 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"} Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229783 4867 scope.go:117] "RemoveContainer" containerID="8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65" Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.235181 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:47:32 crc kubenswrapper[4867]: E0214 04:47:32.236216 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:47:36 crc kubenswrapper[4867]: I0214 04:47:36.272193 4867 generic.go:334] "Generic (PLEG): container finished" podID="01cb12dd-9d34-4898-941a-05635d21630f" containerID="eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0" exitCode=0 Feb 14 04:47:36 crc kubenswrapper[4867]: I0214 04:47:36.272283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerDied","Data":"eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0"} Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.791705 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900599 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: 
\"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900961 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: 
\"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901391 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901418 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 
crc kubenswrapper[4867]: I0214 04:47:37.901756 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.908992 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909962 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.910551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb" (OuterVolumeSpecName: "kube-api-access-j7lqb") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "kube-api-access-j7lqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914456 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.916278 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.916402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.917458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.921535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.948407 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.954299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory" (OuterVolumeSpecName: "inventory") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005452 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005519 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005531 4867 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005544 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005558 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005570 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005582 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005594 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005604 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005614 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc 
kubenswrapper[4867]: I0214 04:47:38.005623 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005631 4867 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005639 4867 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005647 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005672 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005684 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.298961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerDied","Data":"711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb"} Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.299009 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.299070 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.436947 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"] Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437836 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-utilities" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437863 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-utilities" Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437886 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437894 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server" Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437941 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437949 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437959 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-content" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437965 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-content" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.438181 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.438206 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.439101 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444294 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444486 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444962 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.445111 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.450188 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"] Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518901 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518971 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.621597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.621977 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622486 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.623122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: 
\"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.625216 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.625597 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.627343 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.649688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.762634 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:47:39 crc kubenswrapper[4867]: I0214 04:47:39.436355 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"] Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.323525 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerStarted","Data":"c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af"} Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.323931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerStarted","Data":"3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d"} Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.388099 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" podStartSLOduration=1.854590189 podStartE2EDuration="2.38806498s" podCreationTimestamp="2026-02-14 04:47:38 +0000 UTC" firstStartedPulling="2026-02-14 04:47:39.438737493 +0000 UTC m=+2291.519674807" lastFinishedPulling="2026-02-14 04:47:39.972212244 +0000 UTC m=+2292.053149598" observedRunningTime="2026-02-14 04:47:40.359974241 +0000 UTC m=+2292.440911565" watchObservedRunningTime="2026-02-14 04:47:40.38806498 +0000 UTC m=+2292.469002294" Feb 14 04:47:43 crc kubenswrapper[4867]: I0214 04:47:43.997744 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:47:43 crc kubenswrapper[4867]: E0214 04:47:43.998552 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:47:46 crc kubenswrapper[4867]: I0214 04:47:46.050697 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:47:46 crc kubenswrapper[4867]: I0214 04:47:46.061756 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:47:47 crc kubenswrapper[4867]: I0214 04:47:47.015994 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" path="/var/lib/kubelet/pods/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf/volumes" Feb 14 04:47:54 crc kubenswrapper[4867]: I0214 04:47:54.997446 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:47:54 crc kubenswrapper[4867]: E0214 04:47:54.998156 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:05 crc kubenswrapper[4867]: I0214 04:48:05.998375 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:06 crc kubenswrapper[4867]: E0214 04:48:05.999473 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:15 crc kubenswrapper[4867]: I0214 04:48:15.474127 4867 scope.go:117] "RemoveContainer" containerID="fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a" Feb 14 04:48:16 crc kubenswrapper[4867]: I0214 04:48:16.998901 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:17 crc kubenswrapper[4867]: E0214 04:48:16.999424 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:27 crc kubenswrapper[4867]: I0214 04:48:27.997698 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:27 crc kubenswrapper[4867]: E0214 04:48:27.998614 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:37 crc kubenswrapper[4867]: I0214 04:48:37.960584 4867 generic.go:334] "Generic (PLEG): container finished" podID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerID="c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af" exitCode=0 
Feb 14 04:48:37 crc kubenswrapper[4867]: I0214 04:48:37.960720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerDied","Data":"c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af"} Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.515728 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639322 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639360 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " Feb 14 
04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639475 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.645937 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.645957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6" (OuterVolumeSpecName: "kube-api-access-h75f6") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "kube-api-access-h75f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.672211 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory" (OuterVolumeSpecName: "inventory") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.675105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.679717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742419 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742464 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742476 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742486 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742496 4867 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerDied","Data":"3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d"} Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981402 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981531 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.125998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:40 crc kubenswrapper[4867]: E0214 04:48:40.126811 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.126829 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.127069 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.128189 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130566 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130786 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130945 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131140 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131307 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131529 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.141943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.158984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159175 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159230 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159324 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159676 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.262768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqghv\" (UniqueName: 
\"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263507 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263707 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.266973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.266983 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.267964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.268539 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.270082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.284102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.445829 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:41 crc kubenswrapper[4867]: I0214 04:48:41.050315 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.008205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerStarted","Data":"0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170"} Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.008960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerStarted","Data":"1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1"} Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.035642 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" podStartSLOduration=1.56610504 podStartE2EDuration="2.035619818s" podCreationTimestamp="2026-02-14 04:48:40 +0000 UTC" firstStartedPulling="2026-02-14 04:48:41.061457349 +0000 UTC m=+2353.142394673" lastFinishedPulling="2026-02-14 04:48:41.530972137 +0000 UTC m=+2353.611909451" observedRunningTime="2026-02-14 04:48:42.024141806 +0000 UTC m=+2354.105079140" watchObservedRunningTime="2026-02-14 04:48:42.035619818 +0000 UTC m=+2354.116557132" Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.998315 4867 scope.go:117] "RemoveContainer" 
containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:42 crc kubenswrapper[4867]: E0214 04:48:42.998718 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:54 crc kubenswrapper[4867]: I0214 04:48:54.998473 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:54 crc kubenswrapper[4867]: E0214 04:48:54.999563 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:05 crc kubenswrapper[4867]: I0214 04:49:05.997356 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:05 crc kubenswrapper[4867]: E0214 04:49:05.998241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:19 crc kubenswrapper[4867]: I0214 04:49:19.998245 4867 scope.go:117] 
"RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:20 crc kubenswrapper[4867]: E0214 04:49:20.000285 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:26 crc kubenswrapper[4867]: I0214 04:49:26.685105 4867 generic.go:334] "Generic (PLEG): container finished" podID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerID="0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170" exitCode=0 Feb 14 04:49:26 crc kubenswrapper[4867]: I0214 04:49:26.685244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerDied","Data":"0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170"} Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.272331 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431195 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431289 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: 
\"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.437219 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv" (OuterVolumeSpecName: "kube-api-access-jqghv") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "kube-api-access-jqghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.438009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.465132 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory" (OuterVolumeSpecName: "inventory") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.465318 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.466125 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.485342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534636 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534672 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534683 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534693 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534702 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534713 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerDied","Data":"1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1"} Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715205 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715250 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.803736 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:28 crc kubenswrapper[4867]: E0214 04:49:28.804521 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.804564 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.804878 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.805955 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809032 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809700 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809843 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.810038 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.817141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.840953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: 
\"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.842136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943585 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.948790 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.950468 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.951170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.952300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.963746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.125747 4867 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.699201 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.728662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerStarted","Data":"ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba"} Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.113564 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.739336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerStarted","Data":"9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4"} Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.765559 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" podStartSLOduration=2.364376468 podStartE2EDuration="2.765499687s" podCreationTimestamp="2026-02-14 04:49:28 +0000 UTC" firstStartedPulling="2026-02-14 04:49:29.708349837 +0000 UTC m=+2401.789287151" lastFinishedPulling="2026-02-14 04:49:30.109473056 +0000 UTC m=+2402.190410370" observedRunningTime="2026-02-14 04:49:30.762837717 +0000 UTC m=+2402.843775041" watchObservedRunningTime="2026-02-14 04:49:30.765499687 +0000 UTC m=+2402.846437031" Feb 14 04:49:34 crc kubenswrapper[4867]: I0214 04:49:34.997990 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:34 crc kubenswrapper[4867]: 
E0214 04:49:34.998731 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:46 crc kubenswrapper[4867]: I0214 04:49:46.998376 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:47 crc kubenswrapper[4867]: E0214 04:49:46.999345 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:59 crc kubenswrapper[4867]: I0214 04:49:59.998375 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:00 crc kubenswrapper[4867]: E0214 04:50:00.000575 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:12 crc kubenswrapper[4867]: I0214 04:50:12.998123 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:13 crc 
kubenswrapper[4867]: E0214 04:50:12.999400 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:26 crc kubenswrapper[4867]: I0214 04:50:26.997750 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:27 crc kubenswrapper[4867]: E0214 04:50:26.998697 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:39 crc kubenswrapper[4867]: I0214 04:50:39.006893 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:39 crc kubenswrapper[4867]: E0214 04:50:39.007731 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:49 crc kubenswrapper[4867]: I0214 04:50:49.996948 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 
14 04:50:49 crc kubenswrapper[4867]: E0214 04:50:49.997795 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:03 crc kubenswrapper[4867]: I0214 04:51:03.997829 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:04 crc kubenswrapper[4867]: E0214 04:51:03.998766 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:19 crc kubenswrapper[4867]: I0214 04:51:19.004744 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:19 crc kubenswrapper[4867]: E0214 04:51:19.005924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:33 crc kubenswrapper[4867]: I0214 04:51:33.998072 4867 scope.go:117] "RemoveContainer" 
containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:34 crc kubenswrapper[4867]: E0214 04:51:34.002192 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:45 crc kubenswrapper[4867]: I0214 04:51:45.996864 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:45 crc kubenswrapper[4867]: E0214 04:51:45.997766 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:00 crc kubenswrapper[4867]: I0214 04:52:00.998607 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:01 crc kubenswrapper[4867]: E0214 04:52:00.999481 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:14 crc kubenswrapper[4867]: I0214 04:52:14.998391 4867 scope.go:117] 
"RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:15 crc kubenswrapper[4867]: E0214 04:52:14.999257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:29 crc kubenswrapper[4867]: I0214 04:52:29.997565 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:29 crc kubenswrapper[4867]: E0214 04:52:29.998587 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:44 crc kubenswrapper[4867]: I0214 04:52:44.997745 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:45 crc kubenswrapper[4867]: I0214 04:52:45.890602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"} Feb 14 04:53:18 crc kubenswrapper[4867]: I0214 04:53:18.249987 4867 generic.go:334] "Generic (PLEG): container finished" podID="8ec3156c-bcce-4dee-8ce5-7773409e880e" 
containerID="9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4" exitCode=0 Feb 14 04:53:18 crc kubenswrapper[4867]: I0214 04:53:18.250163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerDied","Data":"9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4"} Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.776990 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.873358 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874085 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874755 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.880550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.883122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj" (OuterVolumeSpecName: "kube-api-access-5dbfj") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "kube-api-access-5dbfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.908174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.910538 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.923141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory" (OuterVolumeSpecName: "inventory") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978281 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978327 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978341 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978349 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978359 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerDied","Data":"ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba"} Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276672 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276717 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.401956 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:20 crc kubenswrapper[4867]: E0214 04:53:20.402632 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.402658 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.402962 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.404007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411384 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411595 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411986 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.413314 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.415370 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.415390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.440142 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493410 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493553 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493615 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493913 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: 
\"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597144 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" 
Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.598163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.600467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.600911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601354 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.602233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.602406 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.608445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.611344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.619591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.732419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:21 crc kubenswrapper[4867]: I0214 04:53:21.318665 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:21 crc kubenswrapper[4867]: I0214 04:53:21.324244 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.299026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerStarted","Data":"35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091"} Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.299343 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerStarted","Data":"22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35"} Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.320063 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" podStartSLOduration=1.669509908 podStartE2EDuration="2.320042793s" podCreationTimestamp="2026-02-14 04:53:20 +0000 UTC" firstStartedPulling="2026-02-14 04:53:21.324004312 +0000 UTC m=+2633.404941626" lastFinishedPulling="2026-02-14 04:53:21.974537197 +0000 UTC m=+2634.055474511" observedRunningTime="2026-02-14 04:53:22.316651063 +0000 UTC m=+2634.397588407" watchObservedRunningTime="2026-02-14 04:53:22.320042793 +0000 UTC m=+2634.400980107" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.314931 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.318899 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.325947 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.420467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.421201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: 
I0214 04:54:36.421484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.525000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.525243 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.546424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.655938 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.256586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610449 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" exitCode=0 Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89"} Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"26d6ff1e05b77e17e7dadab06eeb78e805a361ff6edbdf729681eeed3227639f"} Feb 14 
04:54:38 crc kubenswrapper[4867]: I0214 04:54:38.623912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} Feb 14 04:54:40 crc kubenswrapper[4867]: I0214 04:54:40.646062 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" exitCode=0 Feb 14 04:54:40 crc kubenswrapper[4867]: I0214 04:54:40.646141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} Feb 14 04:54:41 crc kubenswrapper[4867]: I0214 04:54:41.673011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} Feb 14 04:54:41 crc kubenswrapper[4867]: I0214 04:54:41.706425 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52dzz" podStartSLOduration=2.264111703 podStartE2EDuration="5.706401399s" podCreationTimestamp="2026-02-14 04:54:36 +0000 UTC" firstStartedPulling="2026-02-14 04:54:37.613184892 +0000 UTC m=+2709.694122206" lastFinishedPulling="2026-02-14 04:54:41.055474588 +0000 UTC m=+2713.136411902" observedRunningTime="2026-02-14 04:54:41.694375093 +0000 UTC m=+2713.775312417" watchObservedRunningTime="2026-02-14 04:54:41.706401399 +0000 UTC m=+2713.787338713" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.656288 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.656849 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.711318 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.778710 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.953717 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:48 crc kubenswrapper[4867]: I0214 04:54:48.753610 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52dzz" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" containerID="cri-o://09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" gracePeriod=2 Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.303174 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.433544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities" (OuterVolumeSpecName: "utilities") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.434622 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.439450 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz" (OuterVolumeSpecName: "kube-api-access-8zjlz") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "kube-api-access-8zjlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.496160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.538189 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.538230 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766132 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" exitCode=0 Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766201 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"26d6ff1e05b77e17e7dadab06eeb78e805a361ff6edbdf729681eeed3227639f"} Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766788 4867 scope.go:117] "RemoveContainer" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.803806 4867 scope.go:117] "RemoveContainer" 
containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.830142 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.838330 4867 scope.go:117] "RemoveContainer" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.854059 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.926427 4867 scope.go:117] "RemoveContainer" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.927089 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": container with ID starting with 09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1 not found: ID does not exist" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927174 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} err="failed to get container status \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": rpc error: code = NotFound desc = could not find container \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": container with ID starting with 09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1 not found: ID does not exist" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927217 4867 scope.go:117] "RemoveContainer" 
containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.927912 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": container with ID starting with 7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc not found: ID does not exist" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927947 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} err="failed to get container status \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": rpc error: code = NotFound desc = could not find container \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": container with ID starting with 7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc not found: ID does not exist" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927970 4867 scope.go:117] "RemoveContainer" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.928256 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": container with ID starting with 3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89 not found: ID does not exist" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.928300 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89"} err="failed to get container status \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": rpc error: code = NotFound desc = could not find container \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": container with ID starting with 3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89 not found: ID does not exist" Feb 14 04:54:51 crc kubenswrapper[4867]: I0214 04:54:51.019722 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" path="/var/lib/kubelet/pods/53d6fbce-336b-46b4-85fe-b03c0b7d9339/volumes" Feb 14 04:55:01 crc kubenswrapper[4867]: I0214 04:55:01.251378 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:55:01 crc kubenswrapper[4867]: I0214 04:55:01.252057 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.703021 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704187 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704207 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" 
containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704223 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-utilities" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704231 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-utilities" Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-content" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704288 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-content" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704583 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.752408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.752664 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982757 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod 
\"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.983157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.983257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.008973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.126797 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.654779 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157075 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" exitCode=0 Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162"} Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"801c2a49873ba7dc052c0cafff2d252c8c67d675c1ccdad781acd1f9ae903e7b"} Feb 14 04:55:23 crc kubenswrapper[4867]: I0214 04:55:23.179170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} Feb 14 04:55:24 crc kubenswrapper[4867]: I0214 04:55:24.192845 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" exitCode=0 Feb 14 04:55:24 crc kubenswrapper[4867]: I0214 04:55:24.193043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" 
event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} Feb 14 04:55:25 crc kubenswrapper[4867]: I0214 04:55:25.210634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} Feb 14 04:55:25 crc kubenswrapper[4867]: I0214 04:55:25.245047 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2r6zz" podStartSLOduration=2.770071637 podStartE2EDuration="5.245017588s" podCreationTimestamp="2026-02-14 04:55:20 +0000 UTC" firstStartedPulling="2026-02-14 04:55:22.163166483 +0000 UTC m=+2754.244103837" lastFinishedPulling="2026-02-14 04:55:24.638112474 +0000 UTC m=+2756.719049788" observedRunningTime="2026-02-14 04:55:25.230184058 +0000 UTC m=+2757.311121372" watchObservedRunningTime="2026-02-14 04:55:25.245017588 +0000 UTC m=+2757.325954902" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.127781 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.128405 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.190978 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.251467 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.254837 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.348880 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.444963 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:33 crc kubenswrapper[4867]: I0214 04:55:33.305761 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2r6zz" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server" containerID="cri-o://8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" gracePeriod=2 Feb 14 04:55:33 crc kubenswrapper[4867]: I0214 04:55:33.921296 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.080967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.081092 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.081413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.082369 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities" (OuterVolumeSpecName: "utilities") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.083583 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.094924 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv" (OuterVolumeSpecName: "kube-api-access-k8tkv") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "kube-api-access-k8tkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.106400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.186750 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.186794 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322336 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" exitCode=0 Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322554 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"801c2a49873ba7dc052c0cafff2d252c8c67d675c1ccdad781acd1f9ae903e7b"} Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322493 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322582 4867 scope.go:117] "RemoveContainer" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.358669 4867 scope.go:117] "RemoveContainer" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.381873 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.392687 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.411044 4867 scope.go:117] "RemoveContainer" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.469957 4867 scope.go:117] "RemoveContainer" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.470650 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": container with ID starting with 8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a not found: ID does not exist" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.470742 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} err="failed to get container status \"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": rpc error: code = NotFound desc = could not find container 
\"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": container with ID starting with 8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a not found: ID does not exist" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.470821 4867 scope.go:117] "RemoveContainer" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.471271 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": container with ID starting with 76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb not found: ID does not exist" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.471744 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} err="failed to get container status \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": rpc error: code = NotFound desc = could not find container \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": container with ID starting with 76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb not found: ID does not exist" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.471831 4867 scope.go:117] "RemoveContainer" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.472266 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": container with ID starting with 864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162 not found: ID does not exist" 
containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.472357 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162"} err="failed to get container status \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": rpc error: code = NotFound desc = could not find container \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": container with ID starting with 864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162 not found: ID does not exist" Feb 14 04:55:35 crc kubenswrapper[4867]: I0214 04:55:35.011527 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" path="/var/lib/kubelet/pods/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e/volumes" Feb 14 04:55:43 crc kubenswrapper[4867]: I0214 04:55:43.459939 4867 generic.go:334] "Generic (PLEG): container finished" podID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerID="35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091" exitCode=0 Feb 14 04:55:43 crc kubenswrapper[4867]: I0214 04:55:43.460805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerDied","Data":"35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091"} Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.072393 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204636 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204763 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204932 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204970 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205011 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.212991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz" (OuterVolumeSpecName: "kube-api-access-p9ccz") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "kube-api-access-p9ccz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.213738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.239345 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.245602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.252273 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.252739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.262533 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.267861 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.270235 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.270603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.273766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory" (OuterVolumeSpecName: "inventory") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309075 4867 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309113 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309123 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309133 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309142 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309152 4867 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309161 4867 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") on 
node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309170 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309178 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309186 4867 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309197 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.487316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerDied","Data":"22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35"} Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.487377 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.487708 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.622796 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"]
Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623607 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623628 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623653 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-content"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623661 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-content"
Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623685 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-utilities"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623693 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-utilities"
Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623733 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623741 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.624114 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.624135 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.625611 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629440 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629497 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629829 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.630000 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.635839 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.639071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"]
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.822608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823348 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.830238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.830789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.831068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.831768 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.832222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.832592 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.845957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.950090 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"
Feb 14 04:55:46 crc kubenswrapper[4867]: I0214 04:55:46.689033 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"]
Feb 14 04:55:47 crc kubenswrapper[4867]: I0214 04:55:47.525253 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerStarted","Data":"1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4"}
Feb 14 04:55:48 crc kubenswrapper[4867]: I0214 04:55:48.539850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerStarted","Data":"fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8"}
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.251420 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.252036 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.252096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.253135 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.253188 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" gracePeriod=600
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.703815 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" exitCode=0
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.703893 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"}
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.704212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"}
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.704247 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.733017 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" podStartSLOduration=16.059550371 podStartE2EDuration="16.732993094s" podCreationTimestamp="2026-02-14 04:55:45 +0000 UTC" firstStartedPulling="2026-02-14 04:55:46.693450578 +0000 UTC m=+2778.774387902" lastFinishedPulling="2026-02-14 04:55:47.366893311 +0000 UTC m=+2779.447830625" observedRunningTime="2026-02-14 04:55:48.563269471 +0000 UTC m=+2780.644206785" watchObservedRunningTime="2026-02-14 04:56:01.732993094 +0000 UTC m=+2793.813930408"
Feb 14 04:56:19 crc kubenswrapper[4867]: I0214 04:56:19.983787 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:56:19 crc kubenswrapper[4867]: I0214 04:56:19.987782 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.006972 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098405 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.202146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.202211 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.232449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.327615 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.919652 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975367 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9" exitCode=0
Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"}
Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"56a1eb0b8c1466acdb90dcebf861602011eca4a1fc13f846a3780bf30b13d856"}
Feb 14 04:56:24 crc kubenswrapper[4867]: I0214 04:56:24.013667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"}
Feb 14 04:56:31 crc kubenswrapper[4867]: I0214 04:56:31.120543 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b" exitCode=0
Feb 14 04:56:31 crc kubenswrapper[4867]: I0214 04:56:31.120662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"}
Feb 14 04:56:33 crc kubenswrapper[4867]: I0214 04:56:33.144448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"}
Feb 14 04:56:33 crc kubenswrapper[4867]: I0214 04:56:33.173117 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zhn6" podStartSLOduration=4.089503238 podStartE2EDuration="14.173097343s" podCreationTimestamp="2026-02-14 04:56:19 +0000 UTC" firstStartedPulling="2026-02-14 04:56:21.979755752 +0000 UTC m=+2814.060693066" lastFinishedPulling="2026-02-14 04:56:32.063349857 +0000 UTC m=+2824.144287171" observedRunningTime="2026-02-14 04:56:33.163824089 +0000 UTC m=+2825.244761423" watchObservedRunningTime="2026-02-14 04:56:33.173097343 +0000 UTC m=+2825.254034657"
Feb 14 04:56:40 crc kubenswrapper[4867]: I0214 04:56:40.327811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:40 crc kubenswrapper[4867]: I0214 04:56:40.328727 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:56:41 crc kubenswrapper[4867]: I0214 04:56:41.416607 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zhn6" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:56:41 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:56:41 crc kubenswrapper[4867]: >
Feb 14 04:56:51 crc kubenswrapper[4867]: I0214 04:56:51.374680 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zhn6" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:56:51 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:56:51 crc kubenswrapper[4867]: >
Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.403811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.470828 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.666980 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:57:01 crc kubenswrapper[4867]: I0214 04:57:01.464199 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zhn6" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" containerID="cri-o://55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" gracePeriod=2
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.012388 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.195828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") "
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") "
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") "
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196889 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities" (OuterVolumeSpecName: "utilities") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.197063 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.209801 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp" (OuterVolumeSpecName: "kube-api-access-vlzbp") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "kube-api-access-vlzbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.299354 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") on node \"crc\" DevicePath \"\""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.339590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.401641 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489123 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" exitCode=0
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"}
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"56a1eb0b8c1466acdb90dcebf861602011eca4a1fc13f846a3780bf30b13d856"}
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489258 4867 scope.go:117] "RemoveContainer" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.491476 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.531191 4867 scope.go:117] "RemoveContainer" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.566625 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.581331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"]
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.593483 4867 scope.go:117] "RemoveContainer" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.622733 4867 scope.go:117] "RemoveContainer" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"
Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.623385 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": container with ID starting with 55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517 not found: ID does not exist" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.623752 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"} err="failed to get container status \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": rpc error: code = NotFound desc = could not find container \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": container with ID starting with 55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517 not found: ID does not exist"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.623965 4867 scope.go:117] "RemoveContainer" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"
Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.624639 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": container with ID starting with 11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b not found: ID does not exist" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.624698 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"} err="failed to get container status \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": rpc error: code = NotFound desc = could not find container \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": container with ID starting with 11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b not found: ID does not exist"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.624812 4867 scope.go:117] "RemoveContainer" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"
Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.626462 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": container with ID starting with 725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9 not found: ID does not exist" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"
Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.626541 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"} err="failed to get container status \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": rpc error: code = NotFound desc = could not find container \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": container with ID starting with 725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9 not found: ID does not exist"
Feb 14 04:57:03 crc kubenswrapper[4867]: I0214 04:57:03.013268 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" path="/var/lib/kubelet/pods/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0/volumes"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.685623 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"]
Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687454 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-content"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687476 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-content"
Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687519 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-utilities"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687527 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-utilities"
Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687548 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687781 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.692131 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.704321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"]
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730251 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.838560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.839141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.839280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.840149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.840431 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz"
Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.872971 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:45 crc kubenswrapper[4867]: I0214 04:57:45.029228 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:45 crc kubenswrapper[4867]: I0214 04:57:45.599040 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:57:46 crc kubenswrapper[4867]: I0214 04:57:46.038128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} Feb 14 04:57:46 crc kubenswrapper[4867]: I0214 04:57:46.038533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"88f2bea0ce99dfcf034026bb7b57d2e0b66ee5141d7ee7aec3701eb987c003d7"} Feb 14 04:57:47 crc kubenswrapper[4867]: I0214 04:57:47.054477 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" exitCode=0 Feb 14 04:57:47 crc kubenswrapper[4867]: I0214 04:57:47.054553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} Feb 14 04:57:49 crc kubenswrapper[4867]: I0214 04:57:49.079899 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} Feb 14 04:57:51 crc kubenswrapper[4867]: I0214 04:57:51.106596 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" exitCode=0 Feb 14 04:57:51 crc kubenswrapper[4867]: I0214 04:57:51.106695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} Feb 14 04:57:52 crc kubenswrapper[4867]: I0214 04:57:52.123007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} Feb 14 04:57:52 crc kubenswrapper[4867]: I0214 04:57:52.144903 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nx5fz" podStartSLOduration=3.6881285679999998 podStartE2EDuration="8.144878326s" podCreationTimestamp="2026-02-14 04:57:44 +0000 UTC" firstStartedPulling="2026-02-14 04:57:47.05747451 +0000 UTC m=+2899.138411824" lastFinishedPulling="2026-02-14 04:57:51.514224268 +0000 UTC m=+2903.595161582" observedRunningTime="2026-02-14 04:57:52.140668986 +0000 UTC m=+2904.221606300" watchObservedRunningTime="2026-02-14 04:57:52.144878326 +0000 UTC m=+2904.225815650" Feb 14 04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.029996 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 
04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.030633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.085169 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:01 crc kubenswrapper[4867]: I0214 04:58:01.250627 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:58:01 crc kubenswrapper[4867]: I0214 04:58:01.250985 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.094787 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.167308 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.264039 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nx5fz" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" containerID="cri-o://e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" gracePeriod=2 Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.888644 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.978984 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.979064 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.979369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.980092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities" (OuterVolumeSpecName: "utilities") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.986552 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr" (OuterVolumeSpecName: "kube-api-access-ngblr") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "kube-api-access-ngblr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.025137 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083217 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083262 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083275 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277608 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" exitCode=0 Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277721 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.279482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"88f2bea0ce99dfcf034026bb7b57d2e0b66ee5141d7ee7aec3701eb987c003d7"} Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.279564 4867 scope.go:117] "RemoveContainer" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.305563 4867 scope.go:117] "RemoveContainer" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.321341 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.336927 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.346848 4867 scope.go:117] "RemoveContainer" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.399984 4867 scope.go:117] "RemoveContainer" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.400854 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": container with ID starting with e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa not found: ID does not exist" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.400922 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} err="failed to get container status \"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": rpc error: code = NotFound desc = could not find container \"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": container with ID starting with e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa not found: ID does not exist" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.400983 4867 scope.go:117] "RemoveContainer" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.401635 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": container with ID starting with 4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9 not found: ID does not exist" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.401738 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} err="failed to get container status \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": rpc error: code = NotFound desc = could not find container \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": container with ID 
starting with 4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9 not found: ID does not exist" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.401757 4867 scope.go:117] "RemoveContainer" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.403865 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": container with ID starting with ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66 not found: ID does not exist" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.403935 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} err="failed to get container status \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": rpc error: code = NotFound desc = could not find container \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": container with ID starting with ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66 not found: ID does not exist" Feb 14 04:58:07 crc kubenswrapper[4867]: I0214 04:58:07.013084 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" path="/var/lib/kubelet/pods/5c7159af-0dbf-4a2b-b483-522d4e6a28ab/volumes" Feb 14 04:58:08 crc kubenswrapper[4867]: I0214 04:58:08.325664 4867 generic.go:334] "Generic (PLEG): container finished" podID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerID="fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8" exitCode=0 Feb 14 04:58:08 crc kubenswrapper[4867]: I0214 04:58:08.326204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerDied","Data":"fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8"} Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.859874 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989447 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989635 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989731 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 
04:58:09.989868 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.990661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.990732 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.996249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx" (OuterVolumeSpecName: "kube-api-access-5zhlx") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "kube-api-access-5zhlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.998087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.022302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.023536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.025428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory" (OuterVolumeSpecName: "inventory") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.031316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.032296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094696 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094740 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094779 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094794 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094808 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc 
kubenswrapper[4867]: I0214 04:58:10.094819 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094831 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerDied","Data":"1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4"} Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347694 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347714 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472181 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472823 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-utilities" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472851 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-utilities" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472874 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472883 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472917 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-content" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472925 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-content" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472951 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472958 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 
04:58:10.473254 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.473311 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.474772 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491064 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491270 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491370 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.492248 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.494592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508533 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509564 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.611954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.612419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.612442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613664 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.616366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.618666 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.628828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: 
\"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.805693 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:11 crc kubenswrapper[4867]: I0214 04:58:11.341446 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:11 crc kubenswrapper[4867]: I0214 04:58:11.360540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerStarted","Data":"a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1"} Feb 14 04:58:12 crc kubenswrapper[4867]: I0214 04:58:12.372279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerStarted","Data":"003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d"} Feb 14 04:58:31 crc kubenswrapper[4867]: I0214 04:58:31.250847 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:58:31 crc kubenswrapper[4867]: I0214 04:58:31.252677 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 
04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.250870 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.251471 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.251538 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.252486 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.252574 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" gracePeriod=600 Feb 14 04:59:01 crc kubenswrapper[4867]: E0214 04:59:01.375706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922520 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" exitCode=0 Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"} Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922848 4867 scope.go:117] "RemoveContainer" containerID="e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.923466 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:01 crc kubenswrapper[4867]: E0214 04:59:01.924112 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.966077 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" podStartSLOduration=51.587980054 podStartE2EDuration="51.966055947s" 
podCreationTimestamp="2026-02-14 04:58:10 +0000 UTC" firstStartedPulling="2026-02-14 04:58:11.350068092 +0000 UTC m=+2923.431005406" lastFinishedPulling="2026-02-14 04:58:11.728143985 +0000 UTC m=+2923.809081299" observedRunningTime="2026-02-14 04:58:12.391129333 +0000 UTC m=+2924.472066647" watchObservedRunningTime="2026-02-14 04:59:01.966055947 +0000 UTC m=+2974.046993261" Feb 14 04:59:15 crc kubenswrapper[4867]: I0214 04:59:15.998456 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:16 crc kubenswrapper[4867]: E0214 04:59:15.999773 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:29 crc kubenswrapper[4867]: I0214 04:59:29.997353 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:30 crc kubenswrapper[4867]: E0214 04:59:29.998413 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:44 crc kubenswrapper[4867]: I0214 04:59:44.998026 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:44 crc kubenswrapper[4867]: E0214 04:59:44.998997 4867 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:59 crc kubenswrapper[4867]: I0214 04:59:59.021733 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:59 crc kubenswrapper[4867]: E0214 04:59:59.022520 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.169334 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.173542 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.176069 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.178020 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.187645 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.242669 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.243094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.243892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.349848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.358556 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.371197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.505196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.015749 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608078 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerID="7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa" exitCode=0 Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerDied","Data":"7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa"} Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" 
event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerStarted","Data":"07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c"} Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.044722 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.134076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.134556 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.135743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.137697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.150882 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.169038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd" (OuterVolumeSpecName: "kube-api-access-ff5vd") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "kube-api-access-ff5vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239869 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239930 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239947 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.633918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" 
event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerDied","Data":"07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c"} Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.634041 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.634003 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:04 crc kubenswrapper[4867]: I0214 05:00:04.136563 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 05:00:04 crc kubenswrapper[4867]: I0214 05:00:04.149238 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 05:00:05 crc kubenswrapper[4867]: I0214 05:00:05.045767 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" path="/var/lib/kubelet/pods/cb80aae8-69eb-4098-af64-8a1ace025d53/volumes" Feb 14 05:00:11 crc kubenswrapper[4867]: I0214 05:00:11.833841 4867 generic.go:334] "Generic (PLEG): container finished" podID="43f6ac0f-9203-4827-bd57-acbae7793028" containerID="003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d" exitCode=0 Feb 14 05:00:11 crc kubenswrapper[4867]: I0214 05:00:11.833927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerDied","Data":"003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d"} Feb 14 05:00:12 crc kubenswrapper[4867]: I0214 05:00:12.010555 4867 scope.go:117] "RemoveContainer" 
containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:12 crc kubenswrapper[4867]: E0214 05:00:12.011128 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.364535 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415577 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415984 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.421912 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.429155 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm" (OuterVolumeSpecName: "kube-api-access-zt5fm") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "kube-api-access-zt5fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.451004 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory" (OuterVolumeSpecName: "inventory") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.464587 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.480594 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.484390 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.488428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519288 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519325 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519336 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519347 4867 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519357 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519366 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519376 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerDied","Data":"a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1"} Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855732 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855467 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.963346 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:13 crc kubenswrapper[4867]: E0214 05:00:13.964044 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964071 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: E0214 05:00:13.964095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964107 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964403 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964450 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.965847 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.968487 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.968537 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969719 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.981653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: 
\"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.031147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.031187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.133402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.133876 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134054 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.139990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.140269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.147075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.148977 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.151636 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.290124 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.836593 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.845410 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.870391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerStarted","Data":"1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783"} Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.855546 4867 scope.go:117] "RemoveContainer" containerID="5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837" Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.882129 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerStarted","Data":"fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57"} Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.966137 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" podStartSLOduration=2.542089626 podStartE2EDuration="2.966109116s" podCreationTimestamp="2026-02-14 05:00:13 +0000 UTC" firstStartedPulling="2026-02-14 05:00:14.845218947 +0000 UTC m=+3046.926156261" lastFinishedPulling="2026-02-14 05:00:15.269238427 +0000 UTC m=+3047.350175751" observedRunningTime="2026-02-14 
05:00:15.93163682 +0000 UTC m=+3048.012574154" watchObservedRunningTime="2026-02-14 05:00:15.966109116 +0000 UTC m=+3048.047046430" Feb 14 05:00:22 crc kubenswrapper[4867]: I0214 05:00:22.998250 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:23 crc kubenswrapper[4867]: E0214 05:00:23.000789 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:31 crc kubenswrapper[4867]: I0214 05:00:31.044652 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerID="fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57" exitCode=0 Feb 14 05:00:31 crc kubenswrapper[4867]: I0214 05:00:31.044733 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerDied","Data":"fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57"} Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.558855 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.694907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695107 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695562 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695860 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 
05:00:32.713290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb" (OuterVolumeSpecName: "kube-api-access-xsxkb") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "kube-api-access-xsxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.740731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.743197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.751741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.767997 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory" (OuterVolumeSpecName: "inventory") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800432 4867 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800472 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800485 4867 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800495 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800526 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066128 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerDied","Data":"1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783"} Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066191 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783" Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066265 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:37 crc kubenswrapper[4867]: I0214 05:00:37.998112 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:37 crc kubenswrapper[4867]: E0214 05:00:37.998889 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:49 crc kubenswrapper[4867]: I0214 05:00:49.012571 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:49 crc kubenswrapper[4867]: E0214 05:00:49.014011 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.165433 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:00 crc kubenswrapper[4867]: E0214 05:01:00.166399 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.166415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.166676 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.167443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.179349 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.241295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.241865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.242186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.242235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.360299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.361131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.364228 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.370644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.500267 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.001379 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:01 crc kubenswrapper[4867]: E0214 05:01:01.003050 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.180248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.387290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerStarted","Data":"816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d"} Feb 14 05:01:02 crc kubenswrapper[4867]: I0214 05:01:02.402149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerStarted","Data":"7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b"} Feb 14 05:01:02 crc kubenswrapper[4867]: I0214 05:01:02.439172 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29517421-jh7t8" podStartSLOduration=2.439141227 podStartE2EDuration="2.439141227s" podCreationTimestamp="2026-02-14 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:01:02.434652879 +0000 
UTC m=+3094.515590203" watchObservedRunningTime="2026-02-14 05:01:02.439141227 +0000 UTC m=+3094.520078541" Feb 14 05:01:05 crc kubenswrapper[4867]: I0214 05:01:05.447383 4867 generic.go:334] "Generic (PLEG): container finished" podID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerID="7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b" exitCode=0 Feb 14 05:01:05 crc kubenswrapper[4867]: I0214 05:01:05.447458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerDied","Data":"7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b"} Feb 14 05:01:06 crc kubenswrapper[4867]: I0214 05:01:06.956607 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064466 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064613 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 
05:01:07.064775 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.086926 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf" (OuterVolumeSpecName: "kube-api-access-f9rjf") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "kube-api-access-f9rjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.092157 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.139991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.165050 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data" (OuterVolumeSpecName: "config-data") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174257 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174314 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174329 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174338 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478217 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerDied","Data":"816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d"} Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478261 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478299 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:15 crc kubenswrapper[4867]: I0214 05:01:15.997966 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:16 crc kubenswrapper[4867]: E0214 05:01:15.998833 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:27 crc kubenswrapper[4867]: I0214 05:01:27.998007 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:27 crc kubenswrapper[4867]: E0214 05:01:27.998853 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:41 crc kubenswrapper[4867]: I0214 05:01:40.999142 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:41 crc kubenswrapper[4867]: E0214 05:01:41.001955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:54 crc kubenswrapper[4867]: I0214 05:01:54.997560 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:54 crc kubenswrapper[4867]: E0214 05:01:54.998623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:09 crc kubenswrapper[4867]: I0214 05:02:09.998156 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:10 crc kubenswrapper[4867]: E0214 05:02:09.999311 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:20 crc kubenswrapper[4867]: I0214 05:02:20.997550 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:20 crc kubenswrapper[4867]: E0214 05:02:20.998926 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:33 crc kubenswrapper[4867]: I0214 05:02:33.997406 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:33 crc kubenswrapper[4867]: E0214 05:02:33.998194 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:44 crc kubenswrapper[4867]: I0214 05:02:44.997939 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:44 crc kubenswrapper[4867]: E0214 05:02:44.999033 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:59 crc kubenswrapper[4867]: I0214 05:02:59.997531 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:00 crc kubenswrapper[4867]: E0214 05:02:59.998487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:13 crc kubenswrapper[4867]: I0214 05:03:13.997990 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:14 crc kubenswrapper[4867]: E0214 05:03:13.998845 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:25 crc kubenswrapper[4867]: I0214 05:03:25.998254 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:26 crc kubenswrapper[4867]: E0214 05:03:25.999414 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:40 crc kubenswrapper[4867]: I0214 05:03:40.997462 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:40 crc kubenswrapper[4867]: E0214 05:03:40.998663 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:54 crc kubenswrapper[4867]: I0214 05:03:54.998173 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:54 crc kubenswrapper[4867]: E0214 05:03:54.999019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:04:05 crc kubenswrapper[4867]: I0214 05:04:05.998280 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:04:06 crc kubenswrapper[4867]: I0214 05:04:06.613531 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} Feb 14 05:05:35 crc kubenswrapper[4867]: E0214 05:05:35.218803 4867 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:53458->38.102.83.113:33373: write tcp 38.102.83.113:53458->38.102.83.113:33373: write: broken pipe Feb 14 05:06:31 crc kubenswrapper[4867]: I0214 05:06:31.250611 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:06:31 crc kubenswrapper[4867]: I0214 05:06:31.251226 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:07:01 crc kubenswrapper[4867]: I0214 05:07:01.250698 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:07:01 crc kubenswrapper[4867]: I0214 05:07:01.251222 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.250497 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.251551 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 
05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.251631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.253100 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.253186 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e" gracePeriod=600 Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619530 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e" exitCode=0 Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619959 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:07:32 crc kubenswrapper[4867]: I0214 05:07:32.632275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"} Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.820479 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:44 crc kubenswrapper[4867]: E0214 05:07:44.821666 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.821682 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.821987 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.824099 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.841106 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981867 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.085571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086035 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.087183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.110132 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.153750 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.928290 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.781726 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36" exitCode=0 Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.782694 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"} Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.782793 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"dafc13745014642f6bd9d9412ddef647b7ac82a22ff94a3893f227e1e4e1bb8d"} Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.786386 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:07:47 crc kubenswrapper[4867]: I0214 05:07:47.797395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} Feb 14 05:07:53 crc kubenswrapper[4867]: I0214 05:07:53.856835 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737" exitCode=0 Feb 14 05:07:53 crc kubenswrapper[4867]: I0214 05:07:53.856945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} Feb 14 05:07:54 crc kubenswrapper[4867]: I0214 05:07:54.871324 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"} Feb 14 05:07:54 crc kubenswrapper[4867]: I0214 05:07:54.901702 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-849hf" podStartSLOduration=3.462764642 podStartE2EDuration="10.901679183s" podCreationTimestamp="2026-02-14 05:07:44 +0000 UTC" firstStartedPulling="2026-02-14 05:07:46.786161684 +0000 UTC m=+3498.867098998" lastFinishedPulling="2026-02-14 05:07:54.225076225 +0000 UTC m=+3506.306013539" observedRunningTime="2026-02-14 05:07:54.895140051 +0000 UTC m=+3506.976077375" watchObservedRunningTime="2026-02-14 05:07:54.901679183 +0000 UTC m=+3506.982616507" Feb 14 05:07:55 crc kubenswrapper[4867]: I0214 05:07:55.155606 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:55 crc kubenswrapper[4867]: I0214 05:07:55.156107 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:56 crc kubenswrapper[4867]: I0214 05:07:56.212475 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:07:56 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:07:56 crc kubenswrapper[4867]: > Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 
05:08:03.231175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.236146 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.250093 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.326964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.329190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.329525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.433646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod 
\"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434272 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434584 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.479426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: 
\"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.574260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:04 crc kubenswrapper[4867]: I0214 05:08:04.258861 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279158 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf" exitCode=0 Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"} Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"092ed30559d49b05e26166d7bc8674d52cd854597a2573bcd4d145dc2358a4ed"} Feb 14 05:08:06 crc kubenswrapper[4867]: I0214 05:08:06.270409 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:06 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:06 crc kubenswrapper[4867]: > Feb 14 05:08:09 crc kubenswrapper[4867]: I0214 05:08:09.332557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" 
event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.615378 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.619036 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.636944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695250 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.797869 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.797978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798089 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.830122 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.946345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.576088 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.611597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.614062 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.626568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.720967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.721095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " 
pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.721214 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823621 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " 
pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.849024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.953704 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.404856 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.405192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"c79540bf7b44b50c23a2fe282f0135035cd8741d7a1d186a283a56cc1e861b81"} Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.569090 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.415205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"53a76ae710346cf06fcde776027b3014f0c654a9a0ed4d21f46f194e33a884c4"} Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.418488 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" exitCode=0 Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.418631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} Feb 14 05:08:15 crc kubenswrapper[4867]: I0214 05:08:15.429784 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" exitCode=0 Feb 14 05:08:15 crc kubenswrapper[4867]: I0214 05:08:15.429834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.208429 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:16 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:16 crc kubenswrapper[4867]: > Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.454718 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" 
containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0" exitCode=0 Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.454782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.461869 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.466076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.497384 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" exitCode=0 Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.497420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.502623 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" exitCode=0 Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.502702 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.507838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.565415 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq9n4" podStartSLOduration=4.553566536 podStartE2EDuration="17.56539113s" podCreationTimestamp="2026-02-14 05:08:02 +0000 UTC" firstStartedPulling="2026-02-14 05:08:05.283422604 +0000 UTC m=+3517.364359918" lastFinishedPulling="2026-02-14 05:08:18.295247198 +0000 UTC m=+3530.376184512" observedRunningTime="2026-02-14 05:08:19.553921809 +0000 UTC m=+3531.634859113" watchObservedRunningTime="2026-02-14 05:08:19.56539113 +0000 UTC m=+3531.646328444" Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.525785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.531983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.555039 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-r5spn" podStartSLOduration=3.833202197 podStartE2EDuration="9.555016878s" podCreationTimestamp="2026-02-14 05:08:11 +0000 UTC" firstStartedPulling="2026-02-14 05:08:14.42092485 +0000 UTC m=+3526.501862164" lastFinishedPulling="2026-02-14 05:08:20.142739531 +0000 UTC m=+3532.223676845" observedRunningTime="2026-02-14 05:08:20.550345355 +0000 UTC m=+3532.631282669" watchObservedRunningTime="2026-02-14 05:08:20.555016878 +0000 UTC m=+3532.635954192" Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.575878 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9fzdp" podStartSLOduration=4.064465459 podStartE2EDuration="8.575857045s" podCreationTimestamp="2026-02-14 05:08:12 +0000 UTC" firstStartedPulling="2026-02-14 05:08:15.436107808 +0000 UTC m=+3527.517045122" lastFinishedPulling="2026-02-14 05:08:19.947499394 +0000 UTC m=+3532.028436708" observedRunningTime="2026-02-14 05:08:20.571393508 +0000 UTC m=+3532.652330862" watchObservedRunningTime="2026-02-14 05:08:20.575857045 +0000 UTC m=+3532.656794359" Feb 14 05:08:21 crc kubenswrapper[4867]: I0214 05:08:21.947816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:21 crc kubenswrapper[4867]: I0214 05:08:21.948545 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.954811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.956066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.997093 4867 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:22 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:22 crc kubenswrapper[4867]: > Feb 14 05:08:23 crc kubenswrapper[4867]: I0214 05:08:23.576279 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:23 crc kubenswrapper[4867]: I0214 05:08:23.576344 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:24 crc kubenswrapper[4867]: I0214 05:08:24.004163 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9fzdp" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:24 crc kubenswrapper[4867]: > Feb 14 05:08:24 crc kubenswrapper[4867]: I0214 05:08:24.625851 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:24 crc kubenswrapper[4867]: > Feb 14 05:08:26 crc kubenswrapper[4867]: I0214 05:08:26.202784 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:26 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:26 crc kubenswrapper[4867]: > Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 
05:08:33.007191 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:33 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:33 crc kubenswrapper[4867]: > Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 05:08:33.010014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 05:08:33.061110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.242364 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.620339 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:34 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:34 crc kubenswrapper[4867]: > Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.698139 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9fzdp" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" containerID="cri-o://9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" gracePeriod=2 Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.313612 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453969 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.454172 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities" (OuterVolumeSpecName: "utilities") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.454645 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.461930 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr" (OuterVolumeSpecName: "kube-api-access-b5smr") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "kube-api-access-b5smr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.479628 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.557869 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.557926 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.709955 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" exitCode=0 Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.709999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"53a76ae710346cf06fcde776027b3014f0c654a9a0ed4d21f46f194e33a884c4"} Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710104 4867 scope.go:117] "RemoveContainer" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710031 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.738617 4867 scope.go:117] "RemoveContainer" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.755938 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.770183 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.778608 4867 scope.go:117] "RemoveContainer" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831395 4867 scope.go:117] "RemoveContainer" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.831843 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": container with ID starting with 9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590 not found: ID does not exist" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831900 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} err="failed to get container status \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": rpc error: code = NotFound desc = could not find container \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": container with ID starting with 9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590 not found: 
ID does not exist" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831936 4867 scope.go:117] "RemoveContainer" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.832254 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": container with ID starting with d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b not found: ID does not exist" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} err="failed to get container status \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": rpc error: code = NotFound desc = could not find container \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": container with ID starting with d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b not found: ID does not exist" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832309 4867 scope.go:117] "RemoveContainer" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.832649 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": container with ID starting with 84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02 not found: ID does not exist" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832681 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02"} err="failed to get container status \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": rpc error: code = NotFound desc = could not find container \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": container with ID starting with 84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02 not found: ID does not exist" Feb 14 05:08:36 crc kubenswrapper[4867]: I0214 05:08:36.224951 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:36 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:36 crc kubenswrapper[4867]: > Feb 14 05:08:37 crc kubenswrapper[4867]: I0214 05:08:37.010812 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" path="/var/lib/kubelet/pods/140ec2e6-ad78-48a9-b040-c957a66a3455/volumes" Feb 14 05:08:42 crc kubenswrapper[4867]: I0214 05:08:42.008137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:42 crc kubenswrapper[4867]: I0214 05:08:42.074180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.011121 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.630183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.689001 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.802181 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" containerID="cri-o://698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" gracePeriod=2 Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.357375 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.481954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.482331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.482490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.483174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities" (OuterVolumeSpecName: "utilities") pod 
"1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.497699 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj" (OuterVolumeSpecName: "kube-api-access-xkqrj") pod "1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "kube-api-access-xkqrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.538792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585273 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585535 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585623 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815400 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" exitCode=0 Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815490 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815941 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"c79540bf7b44b50c23a2fe282f0135035cd8741d7a1d186a283a56cc1e861b81"} Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815960 4867 scope.go:117] "RemoveContainer" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.852557 4867 scope.go:117] "RemoveContainer" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.860425 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.871317 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.881034 4867 scope.go:117] "RemoveContainer" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947048 4867 scope.go:117] "RemoveContainer" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.947588 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": container with ID starting with 698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346 not found: ID does not exist" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947720 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} err="failed to get container status \"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": rpc error: code = NotFound desc = could not find container \"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": container with ID starting with 698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346 not found: ID does not exist" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947813 4867 scope.go:117] "RemoveContainer" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.948372 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": container with ID starting with daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95 not found: ID does not exist" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.948481 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} err="failed to get container status \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": rpc error: code = NotFound desc = could not find container \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": container with ID 
starting with daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95 not found: ID does not exist" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.948577 4867 scope.go:117] "RemoveContainer" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.949135 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": container with ID starting with 49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418 not found: ID does not exist" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.949167 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} err="failed to get container status \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": rpc error: code = NotFound desc = could not find container \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": container with ID starting with 49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418 not found: ID does not exist" Feb 14 05:08:45 crc kubenswrapper[4867]: I0214 05:08:45.009349 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" path="/var/lib/kubelet/pods/1a9e54e7-1fab-4191-b99b-b976ff519072/volumes" Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.204663 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:46 crc 
kubenswrapper[4867]: > Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.803844 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.804448 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" containerID="cri-o://406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" gracePeriod=2 Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.319448 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.485838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486349 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486588 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities" (OuterVolumeSpecName: "utilities") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.487522 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.491785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j" (OuterVolumeSpecName: "kube-api-access-b2h7j") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "kube-api-access-b2h7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.537952 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.589776 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.589816 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854067 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" exitCode=0 Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"} Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"092ed30559d49b05e26166d7bc8674d52cd854597a2573bcd4d145dc2358a4ed"} Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854158 4867 scope.go:117] "RemoveContainer" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854186 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.895571 4867 scope.go:117] "RemoveContainer" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.901410 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.915450 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.928203 4867 scope.go:117] "RemoveContainer" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980122 4867 scope.go:117] "RemoveContainer" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.980551 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": container with ID starting with 406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104 not found: ID does not exist" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980593 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"} err="failed to get container status \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": rpc error: code = NotFound desc = could not find container \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": container with ID starting with 406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104 not 
found: ID does not exist" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980620 4867 scope.go:117] "RemoveContainer" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0" Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.980875 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": container with ID starting with 6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0 not found: ID does not exist" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980900 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} err="failed to get container status \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": rpc error: code = NotFound desc = could not find container \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": container with ID starting with 6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0 not found: ID does not exist" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980913 4867 scope.go:117] "RemoveContainer" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf" Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.981187 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": container with ID starting with e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf not found: ID does not exist" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf" Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.981218 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"} err="failed to get container status \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": rpc error: code = NotFound desc = could not find container \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": container with ID starting with e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf not found: ID does not exist" Feb 14 05:08:49 crc kubenswrapper[4867]: I0214 05:08:49.019771 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" path="/var/lib/kubelet/pods/61a1135a-8f12-45c1-95f2-b7892a0533bf/volumes" Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.218855 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.273355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.470321 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:08:56 crc kubenswrapper[4867]: I0214 05:08:56.954216 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" containerID="cri-o://e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" gracePeriod=2 Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.458084 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.633100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities" (OuterVolumeSpecName: "utilities") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634217 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.635678 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.641949 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx" (OuterVolumeSpecName: "kube-api-access-jkwvx") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "kube-api-access-jkwvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.738651 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.762280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.840494 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.965986 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" exitCode=0 Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966045 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966061 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"} Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"dafc13745014642f6bd9d9412ddef647b7ac82a22ff94a3893f227e1e4e1bb8d"} Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966429 4867 scope.go:117] "RemoveContainer" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.989415 4867 scope.go:117] "RemoveContainer" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.005502 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.016713 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.031257 4867 scope.go:117] "RemoveContainer" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.090946 4867 scope.go:117] "RemoveContainer" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.091429 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": container with ID starting with e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750 not found: ID does not exist" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091461 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"} err="failed to get container status \"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": rpc error: code = NotFound desc = could not find container \"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": container with ID starting with e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750 not found: ID does not exist" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091486 4867 scope.go:117] "RemoveContainer" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737" Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.091856 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": container with ID starting with 1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737 not found: ID does not exist" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091888 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} err="failed to get container status \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": rpc error: code = NotFound desc = could not find container \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": container with ID 
starting with 1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737 not found: ID does not exist" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091907 4867 scope.go:117] "RemoveContainer" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36" Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.092143 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": container with ID starting with 531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36 not found: ID does not exist" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36" Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.092166 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"} err="failed to get container status \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": rpc error: code = NotFound desc = could not find container \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": container with ID starting with 531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36 not found: ID does not exist" Feb 14 05:08:59 crc kubenswrapper[4867]: I0214 05:08:59.009329 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" path="/var/lib/kubelet/pods/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913/volumes" Feb 14 05:09:31 crc kubenswrapper[4867]: I0214 05:09:31.250965 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:09:31 crc kubenswrapper[4867]: I0214 
05:09:31.251658 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:10:01 crc kubenswrapper[4867]: I0214 05:10:01.251634 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:10:01 crc kubenswrapper[4867]: I0214 05:10:01.252645 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251148 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251726 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251773 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.252771 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.252828 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" gracePeriod=600 Feb 14 05:10:31 crc kubenswrapper[4867]: E0214 05:10:31.373267 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.010439 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" exitCode=0 Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.010536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"} Feb 14 05:10:32 crc 
kubenswrapper[4867]: I0214 05:10:32.010864 4867 scope.go:117] "RemoveContainer" containerID="863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e" Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.011724 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:10:32 crc kubenswrapper[4867]: E0214 05:10:32.012023 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:10:44 crc kubenswrapper[4867]: I0214 05:10:44.997175 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:10:44 crc kubenswrapper[4867]: E0214 05:10:44.998068 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:10:55 crc kubenswrapper[4867]: I0214 05:10:55.998282 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:10:56 crc kubenswrapper[4867]: E0214 05:10:55.999808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:11:09 crc kubenswrapper[4867]: I0214 05:11:09.997887 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:11:09 crc kubenswrapper[4867]: E0214 05:11:09.998800 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:11:22 crc kubenswrapper[4867]: I0214 05:11:22.998560 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:11:23 crc kubenswrapper[4867]: E0214 05:11:22.999472 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:11:35 crc kubenswrapper[4867]: I0214 05:11:35.997635 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:11:36 crc kubenswrapper[4867]: E0214 05:11:36.000932 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:11:47 crc kubenswrapper[4867]: I0214 05:11:47.998448 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:11:48 crc kubenswrapper[4867]: E0214 05:11:47.999151 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:12:02 crc kubenswrapper[4867]: I0214 05:12:01.998319 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:12:02 crc kubenswrapper[4867]: E0214 05:12:02.003776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:12:13 crc kubenswrapper[4867]: I0214 05:12:13.997544 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:12:13 crc kubenswrapper[4867]: E0214 05:12:13.999571 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:12:29 crc kubenswrapper[4867]: I0214 05:12:29.007454 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:12:29 crc kubenswrapper[4867]: E0214 05:12:29.016414 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:12:42 crc kubenswrapper[4867]: I0214 05:12:42.998328 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:12:43 crc kubenswrapper[4867]: E0214 05:12:42.999119 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:12:55 crc kubenswrapper[4867]: I0214 05:12:55.998654 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:12:56 crc kubenswrapper[4867]: E0214 05:12:55.999379 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:13:09 crc kubenswrapper[4867]: I0214 05:13:09.997570 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:13:09 crc kubenswrapper[4867]: E0214 05:13:09.999786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:13:23 crc kubenswrapper[4867]: I0214 05:13:23.997878 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:13:23 crc kubenswrapper[4867]: E0214 05:13:23.998696 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:13:35 crc kubenswrapper[4867]: I0214 05:13:35.997546 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:13:35 crc kubenswrapper[4867]: E0214 05:13:35.998347 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:13:46 crc kubenswrapper[4867]: I0214 05:13:46.997576 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:13:46 crc kubenswrapper[4867]: E0214 05:13:46.998245 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:14:01 crc kubenswrapper[4867]: I0214 05:14:01.002534 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:14:01 crc kubenswrapper[4867]: E0214 05:14:01.003225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:14:13 crc kubenswrapper[4867]: I0214 05:14:13.998340 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:14:14 crc kubenswrapper[4867]: E0214 05:14:13.999397 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:14:26 crc kubenswrapper[4867]: I0214 05:14:26.997709 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:14:26 crc kubenswrapper[4867]: E0214 05:14:26.998360 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:14:40 crc kubenswrapper[4867]: I0214 05:14:40.997888 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:14:40 crc kubenswrapper[4867]: E0214 05:14:40.998792 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:14:53 crc kubenswrapper[4867]: I0214 05:14:53.004081 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:14:53 crc kubenswrapper[4867]: E0214 05:14:53.004994 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.188293 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189868 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189881 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189895 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189901 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189933 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: 
I0214 05:15:00.189939 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189960 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189967 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189979 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189985 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190008 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190015 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190031 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190038 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190050 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 
05:15:00.190055 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190068 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190074 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-content" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190092 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190099 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-utilities" Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190117 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190126 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190368 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190382 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190416 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 
05:15:00.190434 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.191801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.195199 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.195288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.205367 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.295943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.296138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.296160 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.403806 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.061021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.061460 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.124146 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.641985 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"]
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.913663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerStarted","Data":"57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6"}
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.913990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerStarted","Data":"f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"}
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.932843 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" podStartSLOduration=1.9328306290000001 podStartE2EDuration="1.932830629s" podCreationTimestamp="2026-02-14 05:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:15:01.932669265 +0000 UTC m=+3934.013606579" watchObservedRunningTime="2026-02-14 05:15:01.932830629 +0000 UTC m=+3934.013767944"
Feb 14 05:15:02 crc kubenswrapper[4867]: I0214 05:15:02.929156 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerID="57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6" exitCode=0
Feb 14 05:15:02 crc kubenswrapper[4867]: I0214 05:15:02.929238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerDied","Data":"57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6"}
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.402258 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519001 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519123 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.520224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.526811 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.527994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6" (OuterVolumeSpecName: "kube-api-access-mt8z6") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "kube-api-access-mt8z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622210 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622244 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622255 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.718392 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"]
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.729083 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"]
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.948634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerDied","Data":"f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"}
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.949156 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.949340 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:05 crc kubenswrapper[4867]: I0214 05:15:05.011205 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" path="/var/lib/kubelet/pods/f7c88887-cc0d-4b61-9ccc-e5583c27322f/volumes"
Feb 14 05:15:07 crc kubenswrapper[4867]: I0214 05:15:07.997688 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:07 crc kubenswrapper[4867]: E0214 05:15:07.998461 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:15:16 crc kubenswrapper[4867]: I0214 05:15:16.399279 4867 scope.go:117] "RemoveContainer" containerID="1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d"
Feb 14 05:15:21 crc kubenswrapper[4867]: I0214 05:15:21.997726 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:21 crc kubenswrapper[4867]: E0214 05:15:21.998470 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:15:35 crc kubenswrapper[4867]: I0214 05:15:35.998337 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:36 crc kubenswrapper[4867]: I0214 05:15:36.289468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"}
Feb 14 05:18:01 crc kubenswrapper[4867]: I0214 05:18:01.251016 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:18:01 crc kubenswrapper[4867]: I0214 05:18:01.251630 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:18:31 crc kubenswrapper[4867]: I0214 05:18:31.251044 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:18:31 crc kubenswrapper[4867]: I0214 05:18:31.252624 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.364486 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:41 crc kubenswrapper[4867]: E0214 05:18:41.365610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.365627 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.365902 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.367835 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.397279 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.439340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.439568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.442458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.545490 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.568422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.697927 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.313534 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:42 crc kubenswrapper[4867]: W0214 05:18:42.323710 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffaef1cf_3868_4c30_a1db_f2f0e2305795.slice/crio-561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e WatchSource:0}: Error finding container 561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e: Status 404 returned error can't find the container with id 561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835835 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41" exitCode=0
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835885 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"}
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e"}
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.837925 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 05:18:43 crc kubenswrapper[4867]: I0214 05:18:43.849910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"}
Feb 14 05:18:45 crc kubenswrapper[4867]: I0214 05:18:45.886802 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7" exitCode=0
Feb 14 05:18:45 crc kubenswrapper[4867]: I0214 05:18:45.886875 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"}
Feb 14 05:18:46 crc kubenswrapper[4867]: I0214 05:18:46.900375 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"}
Feb 14 05:18:46 crc kubenswrapper[4867]: I0214 05:18:46.936234 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfn52" podStartSLOduration=2.506869348 podStartE2EDuration="5.936208952s" podCreationTimestamp="2026-02-14 05:18:41 +0000 UTC" firstStartedPulling="2026-02-14 05:18:42.837567138 +0000 UTC m=+4154.918504472" lastFinishedPulling="2026-02-14 05:18:46.266906772 +0000 UTC m=+4158.347844076" observedRunningTime="2026-02-14 05:18:46.925929882 +0000 UTC m=+4159.006867206" watchObservedRunningTime="2026-02-14 05:18:46.936208952 +0000 UTC m=+4159.017146286"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.698516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.700062 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.746528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:52 crc kubenswrapper[4867]: I0214 05:18:52.826078 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:52 crc kubenswrapper[4867]: I0214 05:18:52.879494 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:53 crc kubenswrapper[4867]: I0214 05:18:53.966705 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfn52" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server" containerID="cri-o://7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" gracePeriod=2
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.471426 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.482481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") "
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.482540 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") "
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.483673 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities" (OuterVolumeSpecName: "utilities") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.488291 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr" (OuterVolumeSpecName: "kube-api-access-65jzr") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "kube-api-access-65jzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.585126 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") "
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.590437 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") on node \"crc\" DevicePath \"\""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.590478 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.640015 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.693397 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978223 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" exitCode=0
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978263 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"}
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e"}
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978317 4867 scope.go:117] "RemoveContainer" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.980202 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.017241 4867 scope.go:117] "RemoveContainer" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.032803 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.047785 4867 scope.go:117] "RemoveContainer" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.049009 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101387 4867 scope.go:117] "RemoveContainer" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"
Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.101875 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": container with ID starting with 7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356 not found: ID does not exist" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101917 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"} err="failed to get container status \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": rpc error: code = NotFound desc = could not find container \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": container with ID starting with 7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356 not found: ID does not exist"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101954 4867 scope.go:117] "RemoveContainer" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"
Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.102338 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": container with ID starting with ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7 not found: ID does not exist" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102369 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"} err="failed to get container status \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": rpc error: code = NotFound desc = could not find container \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": container with ID starting with ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7 not found: ID does not exist"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102388 4867 scope.go:117] "RemoveContainer" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"
Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.102682 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": container with ID starting with 3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41 not found: ID does not exist" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"
Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102708 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"} err="failed to get container status \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": rpc error: code = NotFound desc = could not find container \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": container with ID starting with 3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41 not found: ID does not exist"
Feb 14 05:18:57 crc kubenswrapper[4867]: I0214 05:18:57.009455 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" path="/var/lib/kubelet/pods/ffaef1cf-3868-4c30-a1db-f2f0e2305795/volumes"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.561779 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ccpdl"]
Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.562980 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-utilities"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.562999 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-utilities"
Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.563063 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-content"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.563071 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-content"
Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.563095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.563104 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.563413 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.565834 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.575273 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"]
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.599894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.599993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.600419 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702247 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.733933 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.889129 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl"
Feb 14 05:18:59 crc kubenswrapper[4867]: I0214 05:18:59.550333 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"]
Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.056678 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96" exitCode=0
Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.056731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"}
Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.057815 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"b915280cad8dbdbca2545e56ab124792f335b9bf6c5c77908ebfda56bab51bc4"}
Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.250994 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:19:01 crc kubenswrapper[4867]:
I0214 05:19:01.251728 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.251786 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.252699 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.252767 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713" gracePeriod=600 Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.084019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087494 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713" exitCode=0 Feb 14 05:19:02 crc 
kubenswrapper[4867]: I0214 05:19:02.087560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087614 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:19:04 crc kubenswrapper[4867]: I0214 05:19:04.132637 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" exitCode=0 Feb 14 05:19:04 crc kubenswrapper[4867]: I0214 05:19:04.134248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} Feb 14 05:19:05 crc kubenswrapper[4867]: I0214 05:19:05.145890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} Feb 14 05:19:05 crc kubenswrapper[4867]: I0214 05:19:05.193715 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ccpdl" podStartSLOduration=2.747283703 podStartE2EDuration="7.19369229s" 
podCreationTimestamp="2026-02-14 05:18:58 +0000 UTC" firstStartedPulling="2026-02-14 05:19:00.058450881 +0000 UTC m=+4172.139388195" lastFinishedPulling="2026-02-14 05:19:04.504859468 +0000 UTC m=+4176.585796782" observedRunningTime="2026-02-14 05:19:05.180943356 +0000 UTC m=+4177.261880670" watchObservedRunningTime="2026-02-14 05:19:05.19369229 +0000 UTC m=+4177.274629624" Feb 14 05:19:08 crc kubenswrapper[4867]: I0214 05:19:08.889582 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:08 crc kubenswrapper[4867]: I0214 05:19:08.890109 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:09 crc kubenswrapper[4867]: I0214 05:19:09.942998 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:09 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:09 crc kubenswrapper[4867]: > Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.650881 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.654898 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.665617 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.814867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.814931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815450 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.833610 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.991145 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:14 crc kubenswrapper[4867]: I0214 05:19:14.506279 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.266831 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab" exitCode=0 Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.266910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"} Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.267702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"e85219fe308c4b890aa56acecb51d412bff27bd86fbcf1d4ca701931e660ccea"} Feb 14 05:19:17 crc kubenswrapper[4867]: I0214 05:19:17.296046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} Feb 14 05:19:19 crc kubenswrapper[4867]: I0214 05:19:19.942803 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:19 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:19 crc kubenswrapper[4867]: > Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.868416 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.872475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.882501 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.953845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.954496 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.954720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " 
pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.058353 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.058498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.175868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " 
pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.238222 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.884315 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:23 crc kubenswrapper[4867]: W0214 05:19:23.894332 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16024882_d3c8_413a_9619_789d77e9f477.slice/crio-0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91 WatchSource:0}: Error finding container 0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91: Status 404 returned error can't find the container with id 0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91 Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.367970 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697" exitCode=0 Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.368026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"} Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.368058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91"} Feb 14 05:19:26 crc kubenswrapper[4867]: I0214 05:19:26.392720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.414409 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21" exitCode=0 Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.414484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.971753 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.059529 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.429242 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21" exitCode=0 Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.429316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.434236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" 
event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"} Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.497338 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2csc4" podStartSLOduration=3.077698623 podStartE2EDuration="7.497297826s" podCreationTimestamp="2026-02-14 05:19:22 +0000 UTC" firstStartedPulling="2026-02-14 05:19:24.370666613 +0000 UTC m=+4196.451603927" lastFinishedPulling="2026-02-14 05:19:28.790265816 +0000 UTC m=+4200.871203130" observedRunningTime="2026-02-14 05:19:29.484237553 +0000 UTC m=+4201.565174887" watchObservedRunningTime="2026-02-14 05:19:29.497297826 +0000 UTC m=+4201.578235140" Feb 14 05:19:30 crc kubenswrapper[4867]: I0214 05:19:30.856583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:30 crc kubenswrapper[4867]: I0214 05:19:30.857868 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" containerID="cri-o://4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" gracePeriod=2 Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.477632 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.578878 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.578992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.579177 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.579701 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities" (OuterVolumeSpecName: "utilities") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.580254 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.584747 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t" (OuterVolumeSpecName: "kube-api-access-d9l2t") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "kube-api-access-d9l2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.631917 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.681619 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.681650 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.837389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840119 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" exitCode=0 Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840160 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840165 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"b915280cad8dbdbca2545e56ab124792f335b9bf6c5c77908ebfda56bab51bc4"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840223 4867 scope.go:117] "RemoveContainer" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.873118 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49vhq" podStartSLOduration=4.112191528 podStartE2EDuration="18.873099629s" podCreationTimestamp="2026-02-14 05:19:13 +0000 UTC" firstStartedPulling="2026-02-14 05:19:15.269028069 +0000 UTC m=+4187.349965383" lastFinishedPulling="2026-02-14 05:19:30.02993617 +0000 UTC m=+4202.110873484" observedRunningTime="2026-02-14 05:19:31.863022814 +0000 UTC m=+4203.943960148" watchObservedRunningTime="2026-02-14 05:19:31.873099629 +0000 UTC m=+4203.954036943" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.881755 4867 scope.go:117] "RemoveContainer" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.901497 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.914357 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-ccpdl"]
Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.959213 4867 scope.go:117] "RemoveContainer" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"
Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.998694 4867 scope.go:117] "RemoveContainer" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"
Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:31.999857 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": container with ID starting with 4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381 not found: ID does not exist" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"
Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:31.999899 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} err="failed to get container status \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": rpc error: code = NotFound desc = could not find container \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": container with ID starting with 4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381 not found: ID does not exist"
Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:31.999924 4867 scope.go:117] "RemoveContainer" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"
Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:32.002832 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": container with ID starting with 41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c not found: ID does not exist" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"
Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.002861 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} err="failed to get container status \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": rpc error: code = NotFound desc = could not find container \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": container with ID starting with 41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c not found: ID does not exist"
Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.002876 4867 scope.go:117] "RemoveContainer" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"
Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:32.003740 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": container with ID starting with 09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96 not found: ID does not exist" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"
Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.003766 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"} err="failed to get container status \"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": rpc error: code = NotFound desc = could not find container \"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": container with ID starting with 09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96 not found: ID does not exist"
Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.009401 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" path="/var/lib/kubelet/pods/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728/volumes"
Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.239883 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.239929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.991971 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.992238 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:19:35 crc kubenswrapper[4867]: I0214 05:19:35.002903 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2csc4" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:19:35 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:19:35 crc kubenswrapper[4867]: >
Feb 14 05:19:35 crc kubenswrapper[4867]: I0214 05:19:35.048895 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:19:35 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:19:35 crc kubenswrapper[4867]: >
Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.293706 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.356049 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.539571 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"]
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.015165 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2csc4" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" containerID="cri-o://80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" gracePeriod=2
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.043579 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:19:45 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:19:45 crc kubenswrapper[4867]: >
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.552684 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.673360 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") "
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.673886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") "
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.674057 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") "
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.674651 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities" (OuterVolumeSpecName: "utilities") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.679392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck" (OuterVolumeSpecName: "kube-api-access-s5rck") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "kube-api-access-s5rck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.701636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") on node \"crc\" DevicePath \"\""
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777523 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777535 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028274 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" exitCode=0
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"}
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91"}
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028662 4867 scope.go:117] "RemoveContainer" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028831 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.070255 4867 scope.go:117] "RemoveContainer" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.107583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"]
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.120198 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"]
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.207930 4867 scope.go:117] "RemoveContainer" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.240779 4867 scope.go:117] "RemoveContainer" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"
Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.241676 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": container with ID starting with 80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f not found: ID does not exist" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.242202 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"} err="failed to get container status \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": rpc error: code = NotFound desc = could not find container \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": container with ID starting with 80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f not found: ID does not exist"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.242417 4867 scope.go:117] "RemoveContainer" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"
Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.243337 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": container with ID starting with 53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21 not found: ID does not exist" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243381 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} err="failed to get container status \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": rpc error: code = NotFound desc = could not find container \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": container with ID starting with 53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21 not found: ID does not exist"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243396 4867 scope.go:117] "RemoveContainer" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"
Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.243815 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": container with ID starting with a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697 not found: ID does not exist" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"
Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243883 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"} err="failed to get container status \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": rpc error: code = NotFound desc = could not find container \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": container with ID starting with a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697 not found: ID does not exist"
Feb 14 05:19:47 crc kubenswrapper[4867]: I0214 05:19:47.012177 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16024882-d3c8-413a-9619-789d77e9f477" path="/var/lib/kubelet/pods/16024882-d3c8-413a-9619-789d77e9f477/volumes"
Feb 14 05:19:55 crc kubenswrapper[4867]: I0214 05:19:55.038374 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:19:55 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:19:55 crc kubenswrapper[4867]: >
Feb 14 05:20:05 crc kubenswrapper[4867]: I0214 05:20:05.070851 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:20:05 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:20:05 crc kubenswrapper[4867]: >
Feb 14 05:20:15 crc kubenswrapper[4867]: I0214 05:20:15.044271 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:20:15 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:20:15 crc kubenswrapper[4867]: >
Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.038669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.094019 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.276001 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"]
Feb 14 05:20:25 crc kubenswrapper[4867]: I0214 05:20:25.445290 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" containerID="cri-o://8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" gracePeriod=2
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.141684 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283036 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") "
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") "
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283345 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") "
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.289858 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj" (OuterVolumeSpecName: "kube-api-access-g5qjj") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "kube-api-access-g5qjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.304850 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities" (OuterVolumeSpecName: "utilities") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.386213 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") on node \"crc\" DevicePath \"\""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.386250 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.437346 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458204 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" exitCode=0
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"}
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"e85219fe308c4b890aa56acecb51d412bff27bd86fbcf1d4ca701931e660ccea"}
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458288 4867 scope.go:117] "RemoveContainer" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458417 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.485256 4867 scope.go:117] "RemoveContainer" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.489678 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.493090 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"]
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.504748 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"]
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.508125 4867 scope.go:117] "RemoveContainer" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.600861 4867 scope.go:117] "RemoveContainer" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"
Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.606183 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": container with ID starting with 8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2 not found: ID does not exist" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.606246 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"} err="failed to get container status \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": rpc error: code = NotFound desc = could not find container \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": container with ID starting with 8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2 not found: ID does not exist"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.606289 4867 scope.go:117] "RemoveContainer" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"
Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.608191 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": container with ID starting with 32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21 not found: ID does not exist" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.608249 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} err="failed to get container status \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": rpc error: code = NotFound desc = could not find container \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": container with ID starting with 32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21 not found: ID does not exist"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.608265 4867 scope.go:117] "RemoveContainer" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"
Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.616700 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": container with ID starting with 9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab not found: ID does not exist" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"
Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.616757 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"} err="failed to get container status \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": rpc error: code = NotFound desc = could not find container \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": container with ID starting with 9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab not found: ID does not exist"
Feb 14 05:20:27 crc kubenswrapper[4867]: I0214 05:20:27.009031 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" path="/var/lib/kubelet/pods/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c/volumes"
Feb 14 05:21:01 crc kubenswrapper[4867]: I0214 05:21:01.250880 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:21:01 crc kubenswrapper[4867]: I0214 05:21:01.251368 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:21:08 crc kubenswrapper[4867]: I0214 05:21:08.761067 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-k6p82" podUID="ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Feb 14 05:21:31 crc kubenswrapper[4867]: I0214 05:21:31.250381 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:21:31 crc kubenswrapper[4867]: I0214 05:21:31.250870 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251110 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251626 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.252468 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.252526 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" gracePeriod=600
Feb 14 05:22:01 crc kubenswrapper[4867]: E0214 05:22:01.372906 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537574 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" exitCode=0
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"}
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537904 4867 scope.go:117] "RemoveContainer" containerID="1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"
Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.538752 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:22:01 crc kubenswrapper[4867]: E0214 05:22:01.539042 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:22:16 crc kubenswrapper[4867]: I0214 05:22:16.997480 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:22:16 crc kubenswrapper[4867]: E0214 05:22:16.998575 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:22:27 crc kubenswrapper[4867]: I0214 05:22:27.997815 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:22:27 crc kubenswrapper[4867]: E0214 05:22:27.998623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:22:39 crc kubenswrapper[4867]: I0214 05:22:39.997842 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:22:39 crc kubenswrapper[4867]: E0214 05:22:39.998621 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:22:51 crc kubenswrapper[4867]: I0214 05:22:51.997736 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:22:51 crc kubenswrapper[4867]: E0214 05:22:51.998710 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:23:06 crc kubenswrapper[4867]: I0214 05:23:06.997833 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:23:06 crc kubenswrapper[4867]: E0214 05:23:06.998838 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:23:17 crc kubenswrapper[4867]: I0214 05:23:17.998179 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:23:18 crc kubenswrapper[4867]: E0214 05:23:17.999256 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:23:29 crc kubenswrapper[4867]: I0214 05:23:29.009660 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:23:29 crc kubenswrapper[4867]: E0214 05:23:29.010559 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:23:41 crc kubenswrapper[4867]: I0214 05:23:41.997081 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:23:41 crc kubenswrapper[4867]: E0214 05:23:41.997811 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:23:53 crc kubenswrapper[4867]: I0214 05:23:53.997805 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:23:53 crc kubenswrapper[4867]: E0214 05:23:53.998777 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:24:05 crc kubenswrapper[4867]: I0214 05:24:05.998024 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:24:05 crc kubenswrapper[4867]: E0214 05:24:05.999063 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:24:21 crc kubenswrapper[4867]: I0214 05:24:21.002233 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:24:21 crc kubenswrapper[4867]: E0214 05:24:21.004356 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:34 crc kubenswrapper[4867]: I0214 05:24:34.997545 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:34 crc kubenswrapper[4867]: E0214 05:24:34.998333 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:45 crc kubenswrapper[4867]: I0214 05:24:45.998569 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:46 crc kubenswrapper[4867]: E0214 05:24:45.999767 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:57 crc kubenswrapper[4867]: I0214 05:24:57.998099 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:58 crc kubenswrapper[4867]: E0214 05:24:57.999069 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:09 crc kubenswrapper[4867]: I0214 05:25:09.004826 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:09 crc kubenswrapper[4867]: E0214 05:25:09.005701 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:21 crc kubenswrapper[4867]: I0214 05:25:21.997552 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:21 crc kubenswrapper[4867]: E0214 05:25:21.998429 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:35 crc kubenswrapper[4867]: I0214 05:25:35.997996 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:35 crc kubenswrapper[4867]: E0214 05:25:35.998910 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:49 crc kubenswrapper[4867]: I0214 05:25:49.997383 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:49 crc kubenswrapper[4867]: E0214 05:25:49.998177 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:03 crc kubenswrapper[4867]: I0214 05:26:03.997804 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:03 crc kubenswrapper[4867]: E0214 05:26:03.998583 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:16 crc kubenswrapper[4867]: I0214 05:26:16.997179 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:16 crc kubenswrapper[4867]: E0214 05:26:16.997943 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:30 crc kubenswrapper[4867]: I0214 05:26:30.999203 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:31 crc kubenswrapper[4867]: E0214 05:26:31.000021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.249426 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250341 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250355 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250374 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250381 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-utilities" Feb 14 05:26:37 
crc kubenswrapper[4867]: E0214 05:26:37.250392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250400 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250409 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250426 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250432 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250451 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250461 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250466 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc 
kubenswrapper[4867]: E0214 05:26:37.250481 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250487 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250528 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250534 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250739 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250755 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250774 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.251549 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.254610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.254815 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.256029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wxg74" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.263489 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.271916 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357613 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.358030 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.460948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461017 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461090 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461525 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461669 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.467332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.467436 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469320 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469631 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.474093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.480065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.481372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.487435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.506977 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.580051 4867 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.372243 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.452539 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.906690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerStarted","Data":"69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a"} Feb 14 05:26:41 crc kubenswrapper[4867]: I0214 05:26:41.996928 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:41 crc kubenswrapper[4867]: E0214 05:26:41.997818 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:57 crc kubenswrapper[4867]: I0214 05:26:56.999246 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:57 crc kubenswrapper[4867]: E0214 05:26:57.000772 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:27:09 crc kubenswrapper[4867]: I0214 05:27:09.998617 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.964077 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.970214 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yam
l,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh78z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a161c594-8af3-458f-911a-bbf51e7bfcdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.971598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" Feb 14 05:27:53 crc kubenswrapper[4867]: I0214 05:27:53.853418 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"} Feb 14 05:27:53 crc kubenswrapper[4867]: E0214 05:27:53.855888 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" Feb 14 05:28:10 crc kubenswrapper[4867]: I0214 05:28:10.131644 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 05:28:13 crc kubenswrapper[4867]: I0214 05:28:13.097725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerStarted","Data":"b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c"} Feb 14 05:28:13 crc kubenswrapper[4867]: I0214 05:28:13.125721 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.491671496 podStartE2EDuration="1m37.125695581s" podCreationTimestamp="2026-02-14 05:26:36 +0000 UTC" firstStartedPulling="2026-02-14 05:26:38.452216737 +0000 UTC m=+4630.533154071" lastFinishedPulling="2026-02-14 05:28:10.086240842 +0000 UTC m=+4722.167178156" observedRunningTime="2026-02-14 05:28:13.115282887 +0000 UTC m=+4725.196220201" 
watchObservedRunningTime="2026-02-14 05:28:13.125695581 +0000 UTC m=+4725.206632905" Feb 14 05:29:18 crc kubenswrapper[4867]: I0214 05:29:18.816062 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:18 crc kubenswrapper[4867]: I0214 05:29:18.879669 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.049746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.050237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.050468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.074394 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.077200 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.135093 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.153472 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.153880 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154220 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.221468 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257427 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: 
\"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.261913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.352971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.357541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.357613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.370284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " 
pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.371262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.497561 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.544191 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.888714 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.892325 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.907595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.907941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.908035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.911003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.011565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.011701 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.012079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.024965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.027589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.048540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.239478 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.135267 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.157547 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.331768 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:22 crc kubenswrapper[4867]: W0214 05:29:22.338176 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07eb1e9_f4cc_4664_b9f6_80322fe0644a.slice/crio-5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8 WatchSource:0}: Error finding container 5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8: Status 404 returned error can't find the container with id 5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.922452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.923198 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.926938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" 
event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"6b53ea8d4257c47786cd3a09e618ae66005b213cde9dca1141144554e272f271"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941017 4867 generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941184 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"611fc79292fb2762358fe75567d94939459a2919b3fc494b0f725c85bd01c821"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.954862 4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.954909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.954954 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8"} Feb 14 05:29:23 crc kubenswrapper[4867]: 
E0214 05:29:23.261190 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3532ff4a_374c_407b_b01c_b63267b0f9f9.slice/crio-3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.976886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"} Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.979236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.981461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.219439 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.220238 4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7" exitCode=0 Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.223711 4867 
generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3" exitCode=0 Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.223769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.742491 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.749259 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.783363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.865600 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.866117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.866288 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.008362 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.029666 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.167869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.266695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"} Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.313230 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwfld" podStartSLOduration=5.562013734 podStartE2EDuration="14.31075793s" podCreationTimestamp="2026-02-14 05:29:18 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.953538667 +0000 UTC m=+4795.034475981" lastFinishedPulling="2026-02-14 05:29:31.702282863 +0000 UTC m=+4803.783220177" observedRunningTime="2026-02-14 05:29:32.288102826 +0000 UTC m=+4804.369040140" watchObservedRunningTime="2026-02-14 05:29:32.31075793 +0000 UTC m=+4804.391695244" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.373113 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:33 crc kubenswrapper[4867]: I0214 05:29:33.280391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"} Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.161790 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n4l4x" podStartSLOduration=6.383599954 podStartE2EDuration="15.161765513s" podCreationTimestamp="2026-02-14 05:29:20 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.958104656 +0000 UTC m=+4795.039041970" lastFinishedPulling="2026-02-14 05:29:31.736270205 +0000 UTC m=+4803.817207529" observedRunningTime="2026-02-14 05:29:33.309491478 +0000 UTC m=+4805.390428802" watchObservedRunningTime="2026-02-14 05:29:35.161765513 +0000 UTC m=+4807.242702837" Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.168759 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:35 crc kubenswrapper[4867]: W0214 05:29:35.277592 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8a4292_e933_464b_b36d_918f43ce6f65.slice/crio-47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910 WatchSource:0}: Error finding container 47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910: Status 404 returned error can't find the container with id 47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910 Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.302697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" 
event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910"}
Feb 14 05:29:36 crc kubenswrapper[4867]: I0214 05:29:36.315237 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf" exitCode=0
Feb 14 05:29:36 crc kubenswrapper[4867]: I0214 05:29:36.315346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf"}
Feb 14 05:29:37 crc kubenswrapper[4867]: I0214 05:29:37.334962 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc" exitCode=0
Feb 14 05:29:37 crc kubenswrapper[4867]: I0214 05:29:37.335063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"}
Feb 14 05:29:38 crc kubenswrapper[4867]: I0214 05:29:38.351011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479"}
Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.499040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.501686 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.708636 4867 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.238300255s: [/var/lib/containers/storage/overlay/548715b8e9244f4bf400b1cdd337ccd8a85917cae6e751f46636b49a47caba3a/diff /var/log/pods/openstack_openstackclient_6fdee887-8ecb-4c1e-8a88-0284fc050f0e/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s
Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.375668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"}
Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.403712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jj9q" podStartSLOduration=4.969712322 podStartE2EDuration="21.403682607s" podCreationTimestamp="2026-02-14 05:29:19 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.932755631 +0000 UTC m=+4795.013692945" lastFinishedPulling="2026-02-14 05:29:39.366725926 +0000 UTC m=+4811.447663230" observedRunningTime="2026-02-14 05:29:40.393757397 +0000 UTC m=+4812.474694711" watchObservedRunningTime="2026-02-14 05:29:40.403682607 +0000 UTC m=+4812.484619921"
Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.591194 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:40 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:40 crc kubenswrapper[4867]:  >
Feb 14 05:29:41 crc kubenswrapper[4867]: I0214 05:29:41.240663 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:29:41 crc kubenswrapper[4867]: I0214 05:29:41.241073 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:29:41 crc kubenswrapper[4867]: I0214 05:29:41.318781 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.402887 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479" exitCode=0
Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.402996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479"}
Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.822543 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:42 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:42 crc kubenswrapper[4867]:  >
Feb 14 05:29:44 crc kubenswrapper[4867]: I0214 05:29:44.463959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287"}
Feb 14 05:29:44 crc kubenswrapper[4867]: I0214 05:29:44.518482 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbzmm" podStartSLOduration=6.810735796 podStartE2EDuration="13.518438122s" podCreationTimestamp="2026-02-14 05:29:31 +0000 UTC" firstStartedPulling="2026-02-14 05:29:36.317578644 +0000 UTC m=+4808.398515958" lastFinishedPulling="2026-02-14 05:29:43.02528097 +0000 UTC m=+4815.106218284" observedRunningTime="2026-02-14 05:29:44.51303331 +0000 UTC m=+4816.593970624" watchObservedRunningTime="2026-02-14 05:29:44.518438122 +0000 UTC m=+4816.599375446"
Feb 14 05:29:49 crc kubenswrapper[4867]: I0214 05:29:49.545020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:29:49 crc kubenswrapper[4867]: I0214 05:29:49.545583 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:29:50 crc kubenswrapper[4867]: I0214 05:29:50.941975 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:50 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:50 crc kubenswrapper[4867]:  >
Feb 14 05:29:50 crc kubenswrapper[4867]: I0214 05:29:50.941985 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:50 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:50 crc kubenswrapper[4867]:  >
Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.373716 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.375192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.471922 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:52 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:52 crc kubenswrapper[4867]:  >
Feb 14 05:29:53 crc kubenswrapper[4867]: I0214 05:29:53.471309 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:53 crc kubenswrapper[4867]: I0214 05:29:53.840327 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:53 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:53 crc kubenswrapper[4867]:  >
Feb 14 05:29:54 crc kubenswrapper[4867]: I0214 05:29:54.551425 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:54 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:54 crc kubenswrapper[4867]:  >
Feb 14 05:29:54 crc kubenswrapper[4867]: I0214 05:29:54.644407 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:29:54 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:29:54 crc kubenswrapper[4867]:  >
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.019978 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.020420 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.253307 4867 trace.go:236] Trace[1095630526]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (14-Feb-2026 05:29:54.157) (total time: 1057ms):
Feb 14 05:29:55 crc kubenswrapper[4867]: Trace[1095630526]: [1.057776266s] [1.057776266s] END
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.830144 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.834128 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.846770 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.846850 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.687091 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.687491 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.755153 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.760221 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.449164 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.449706 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.640476 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.831480 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.831563 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.885778 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.166716 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.166785 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871692 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871757 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871815 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871832 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.036945 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.036996 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.037056 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.037004 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.122834 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.123294 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213744 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213774 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213853 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213854 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213912 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213893 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213949 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213975 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214020 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214065 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213917 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214194 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214229 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214183 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399034 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399043 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399141 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341002 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341097 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341604 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341517 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343521 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343576 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343635 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343584 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.508892 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.766361 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.766361 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.251198 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.251314 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.323861 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.742815 4867 trace.go:236] Trace[1437508483]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (14-Feb-2026 05:30:00.449) (total time: 1265ms):
Feb 14 05:30:01 crc kubenswrapper[4867]: Trace[1437508483]: [1.265513279s] [1.265513279s] END
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.742814 4867 trace.go:236] Trace[1387678168]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (14-Feb-2026 05:30:00.662) (total time: 1050ms):
Feb 14 05:30:01 crc kubenswrapper[4867]: Trace[1387678168]: [1.05098864s] [1.05098864s] END
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760322 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": context deadline exceeded" start-of-body=
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760394 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": context deadline exceeded"
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760339 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760571 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.249235 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:02 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:02 crc kubenswrapper[4867]:  >
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.249632 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:02 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:02 crc kubenswrapper[4867]:  >
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.298380 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:02 crc kubenswrapper[4867]: 	timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:02 crc kubenswrapper[4867]:  >
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.439062 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.442685 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.929717 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.929767 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.081217 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.081322 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.251854 4867 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l8d7w
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252029 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podUID="d1f6fd76-f362-495f-969d-a644f072552f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252128 4867 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l8d7w container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252161 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podUID="d1f6fd76-f362-495f-969d-a644f072552f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.457688 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.640771 4867 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.835388 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.836290 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.069177 4867 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-khbvf container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.069251 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podUID="fdb6e297-9da3-41ff-a6f3-de81833178c8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc 
kubenswrapper[4867]: I0214 05:30:04.136476 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136686 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136519 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.407752 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.541199 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.639763 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.639789 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.680687 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.680966 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681051 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681084 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681113 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.742261 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.742373 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" 
podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.883234 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.883301 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.942093 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.946909 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.982877 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.070472 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.070586 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188717 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188760 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 
05:30:05.189098 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188756 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.189156 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289697 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289736 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289976 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.391629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.579694 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.751690 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.757789 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.758282 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.822744 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.822796 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.828792 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.828857 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.846826 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.846930 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.025698 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.025703 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.066728 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.370547 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" 
output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.370811 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.687926 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.687983 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.754237 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.754315 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" 
probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457663 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457747 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457853 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457959 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.536993 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get 
\"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.831143 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.831269 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.925690 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.925689 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.123975 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.124058 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.868702 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.869076 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.868708 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.869146 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038619 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038678 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038694 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038734 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.043991 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.044045 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.044084 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.044134 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.098711 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.098796 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.136807 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.136873 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.139674 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.139733 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221744 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221753 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222023 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221884 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222070 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222095 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222174 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222236 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399683 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399711 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399845 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399760 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.780410 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340202 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340527 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340276 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340590 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344648 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344715 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344777 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344795 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.549741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.549866 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.758754 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.758848 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.759700 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.760545 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.761975 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.762047 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.785701 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.785772 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.817225 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.819378 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828558 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828594 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828648 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828654 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846260 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846343 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846405 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846472 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.029169 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: health rpc did not complete within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.029365 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: health rpc did not complete within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.146068 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.148250 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.364711 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.364950 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.370605 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.370681 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792733 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792789 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792800 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792828 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792758 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podUID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.909289 4867 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jsc7b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.909350 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podUID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.419891 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.420154 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.759567 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.759807 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.927671 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.927761 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.081679 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.082017 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.538711 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.538838 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.539224 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.560688 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.574729 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"} pod="metallb-system/frr-k8s-nzdwg" containerMessage="Container frr failed liveness probe, will be restarted"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.601487 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" containerID="cri-o://a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a" gracePeriod=2
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.666866 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-zhmxc" podUID="516cf204-1263-431e-a450-039739b0d925" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.94:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.667009 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-zhmxc" podUID="516cf204-1263-431e-a450-039739b0d925" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.94:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.857698 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.857824 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.858318 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.859298 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.070774 4867 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-khbvf container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.070847 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podUID="fdb6e297-9da3-41ff-a6f3-de81833178c8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.136308 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.136366 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.449752 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14
05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.449895 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.720745 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.802747 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.802852 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.852721 4867 trace.go:236] Trace[276298581]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (14-Feb-2026 05:30:09.453) (total time: 5380ms): Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[276298581]: [5.380506222s] [5.380506222s] END Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.852722 4867 trace.go:236] Trace[1520390512]: "Calculate volume metrics of registry-storage for pod 
openshift-image-registry/image-registry-66df7c8f76-wwh9m" (14-Feb-2026 05:30:10.298) (total time: 4527ms): Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[1520390512]: [4.527796865s] [4.527796865s] END Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.856535 4867 trace.go:236] Trace[1768292916]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (14-Feb-2026 05:30:09.646) (total time: 5180ms): Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[1768292916]: [5.180510583s] [5.180510583s] END Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884697 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884716 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884785 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884874 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" 
output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884894 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884948 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.885102 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.966688 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.007080 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" 
containerID="a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a" exitCode=143 Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.014155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"} Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049749 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049868 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049875 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050167 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050367 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050426 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050519 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.091815 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.132751 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.132862 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.133299 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173686 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173683 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173750 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc 
kubenswrapper[4867]: I0214 05:30:15.173784 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.297697 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.380899 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.463709 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628892 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.34:8081/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628929 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628956 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628897 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629010 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629064 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629384 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629479 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629663 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.711718 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752733 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get 
\"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752781 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752795 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752817 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752732 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761303 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761312 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.830815 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.830840 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podUID="ffb00aaf-6760-440e-827a-f795baf3693a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.872801 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873028 4867 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873129 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podUID="ffb00aaf-6760-440e-827a-f795baf3693a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873367 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873392 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873426 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873440 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873464 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873478 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.884836 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.884931 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.071067 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.071141 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.105575 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.105675 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.196747 4867 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.197167 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="6975f95f-884b-4952-8bf8-0d18537e3403" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272843 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272892 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272853 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.313807 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.313807 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.314220 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.314329 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371385 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371404 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" 
containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371654 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.398690 4867 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.398822 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="3c3333e0-ec4e-41bf-8296-9469ad3ac9cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.686812 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.686920 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 
05:30:16.687059 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796d588566-h9wcn" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754741 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754816 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.763932 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.764883 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.765152 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output="command 
timed out" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.833662 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.833734 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.836048 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.836093 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.839289 4867 trace.go:236] Trace[772054449]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (14-Feb-2026 05:30:14.235) (total time: 2603ms): Feb 14 05:30:16 crc kubenswrapper[4867]: Trace[772054449]: [2.603724734s] [2.603724734s] END Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.845340 4867 patch_prober.go:28] interesting 
pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.845410 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.196072 4867 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.196117 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="6975f95f-884b-4952-8bf8-0d18537e3403" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.458452 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.458773 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463015 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463920 4867 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463986 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="3c3333e0-ec4e-41bf-8296-9469ad3ac9cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.57:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.468067 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"} pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" containerMessage="Container metrics-server failed liveness probe, will be restarted" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.470989 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" 
containerID="cri-o://075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715" gracePeriod=170 Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.614189 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.688562 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.688614 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.759062 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.759460 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get 
\"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.769930 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838825 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838921 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838996 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839399 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839446 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839485 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839515 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839556 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839580 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839629 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get 
\"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839645 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839940 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.840847 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} pod="openshift-marketplace/certified-operators-mrccv" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.840896 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" containerID="cri-o://5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" gracePeriod=30 Feb 14 
05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.867794 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.870577 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.873935 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.873999 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.885675 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 
05:30:17.885777 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.966315 4867 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-wwh9m container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.966391 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podUID="bbf9502a-06eb-4e94-911a-3a7ac1426dd8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.972710 4867 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-wwh9m container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.972769 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podUID="bbf9502a-06eb-4e94-911a-3a7ac1426dd8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233355 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233796 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233854 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.236784 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"} pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.236838 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" containerID="cri-o://b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706" gracePeriod=30 Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.496234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" 
event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"fb3865629417f734b4b087d4b7a5ea9ec4e1ff48d5844ca96c3162d51e0b069a"} Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787864 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787915 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787947 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787982 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.788002 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.788084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.789585 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"} pod="openshift-console-operator/console-operator-58897d9998-htv2n" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.789627 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" containerID="cri-o://0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a" gracePeriod=30 Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.840376 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.840431 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.927845 4867 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040406 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040466 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040522 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.042872 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.042914 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" containerID="cri-o://c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062668 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062739 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062794 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": context deadline exceeded" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062835 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": context deadline exceeded" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062842 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062957 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063027 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063230 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.064217 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" containerMessage="Container olm-operator failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.064296 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" 
containerID="cri-o://6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.144779 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.144863 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234708 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234779 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234786 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234812 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234859 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234856 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234780 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234914 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 
crc kubenswrapper[4867]: I0214 05:30:19.234974 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235012 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235032 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235037 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235051 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235017 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235119 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235141 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235163 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235174 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242811 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} pod="openshift-ingress/router-default-5444994796-qlkzp" containerMessage="Container router failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242828 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242853 4867 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242871 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" containerID="cri-o://d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148" gracePeriod=10 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242885 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" containerID="cri-o://1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242890 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" containerID="cri-o://a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.372651 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399070 4867 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399088 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399134 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399306 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399179 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: 
I0214 05:30:19.399415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.401096 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.401139 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" containerID="cri-o://1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.761358 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.789364 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.789427 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.024371 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.064563 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.064639 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318676 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318798 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318813 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342377 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342454 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342393 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342578 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342648 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343767 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"} pod="openshift-controller-manager/controller-manager-574c444545-stzjc" containerMessage="Container controller-manager failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343837 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343926 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 
05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" containerID="cri-o://8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343954 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344011 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344030 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344856 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"} pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" 
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344890 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" containerID="cri-o://b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.400309 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.400369 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.508687 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.508806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 
05:30:20.546420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerDied","Data":"56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab"} Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.550426 4867 generic.go:334] "Generic (PLEG): container finished" podID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerID="56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab" exitCode=1 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.552323 4867 scope.go:117] "RemoveContainer" containerID="56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760040 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760991 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.761150 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.761960 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763588 4867 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763613 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.766568 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.766743 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" containerID="cri-o://86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.815654 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.815790 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828797 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828867 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828938 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828975 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829001 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829008 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829252 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829286 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846070 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846154 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846105 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846294 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.324773 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.325212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.550732 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.563004 4867 generic.go:334] "Generic (PLEG): container finished" podID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerID="1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570" exitCode=0
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.563083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerDied","Data":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"}
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.571410 4867 generic.go:334] "Generic (PLEG): container finished" podID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerID="dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812" exitCode=1
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.571482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerDied","Data":"dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812"}
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.572814 4867 scope.go:117] "RemoveContainer" containerID="dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577095 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-htv2n_dc723269-8ee6-4236-9eaa-169a00d76442/console-operator/0.log"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577145 4867 generic.go:334] "Generic (PLEG): container finished" podID="dc723269-8ee6-4236-9eaa-169a00d76442" containerID="0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a" exitCode=1
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerDied","Data":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"}
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.581595 4867 generic.go:334] "Generic (PLEG): container finished" podID="46664b60-c0df-4869-9304-cec4de385a86" containerID="6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c" exitCode=0
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.581627 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerDied","Data":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"}
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.761329 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792725 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792738 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podUID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792798 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792847 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792893 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792918 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.793084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.796767 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.910475 4867 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jsc7b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.910567 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podUID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.415703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449656 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449699 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449656 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449775 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449805 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.451208 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"} pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" containerMessage="Container webhook-server failed liveness probe, will be restarted"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.451258 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" containerID="cri-o://7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66" gracePeriod=2
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.592717 4867 generic.go:334] "Generic (PLEG): container finished" podID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerID="b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706" exitCode=0
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.592781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerDied","Data":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"}
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.595014 4867 generic.go:334] "Generic (PLEG): container finished" podID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerID="1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537" exitCode=0
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.595045 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerDied","Data":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"}
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.714575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.718754 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.758961 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.759618 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.793824 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.793879 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928779 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928824 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.930182 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"} pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted"
Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.930231 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" containerID="cri-o://e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d" gracePeriod=10
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.330270 4867 trace.go:236] Trace[1563430983]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (14-Feb-2026 05:30:21.071) (total time: 2241ms):
Feb 14 05:30:23 crc kubenswrapper[4867]: Trace[1563430983]: [2.241733536s] [2.241733536s] END
Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.454002 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.457218 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.460588 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.460656 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.518321 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:23 crc kubenswrapper[4867]: >
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.522766 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:23 crc kubenswrapper[4867]: >
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.526987 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:23 crc kubenswrapper[4867]: >
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.531154 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:23 crc kubenswrapper[4867]: >
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539726 4867 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539722 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539842 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.606637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"fe7e9873ab36c7f8d1e55938a1671ab6f035ea944cbd539e11e2ab7ea37bf6d5"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.606884 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-htv2n_dc723269-8ee6-4236-9eaa-169a00d76442/console-operator/0.log"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609201 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"9134060b76bd36568c962f17b9fb144f5365dea8e3056127b8a490f076986c9c"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609337 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609873 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609905 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.614563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"ac0bf9407908a49c2fd7cb80c9c437229e335f1a3c5baa1bfaeee6f27fce2d00"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.615809 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.616373 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.616412 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618452 4867 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618490 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618682 4867 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618693 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerID="45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e" exitCode=1
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618734 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerDied","Data":"45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.621054 4867 generic.go:334] "Generic (PLEG): container finished" podID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerID="b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c" exitCode=0
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.621119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerDied","Data":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.623321 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"1f5a72a1daf050366de810bc1aa6558f7631e2545468e80dc7bcb0232f9f5e4d"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.623571 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.625262 4867 generic.go:334] "Generic (PLEG): container finished" podID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerID="8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a" exitCode=0
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.625299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerDied","Data":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"}
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.625786 4867 scope.go:117] "RemoveContainer" containerID="45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e"
Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.971741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.030864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.407744 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.408095 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.419351 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.555662 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.555733 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-4hvw7"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556003 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556786 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"} pod="metallb-system/speaker-4hvw7" containerMessage="Container speaker failed liveness probe, will be restarted"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556836 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" containerID="cri-o://1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea" gracePeriod=2
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.595846 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"]
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.599899 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.624324 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.634772 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.657764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"5256716fb99e6b9c6c166c6a352357713533194081156a34479ed30354c65c2c"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658145 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658479 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body=
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658620 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.661149 4867 generic.go:334] "Generic (PLEG): container finished" podID="1b196c26-84a1-408f-913b-eb50572102cf" containerID="c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32" exitCode=0
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.661213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerDied","Data":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.663174 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"648ace95ef188599adcebc066729e1605bfdd3d635297064138d6abe64b4b847"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.665717 4867 generic.go:334] "Generic (PLEG): container finished" podID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerID="7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66" exitCode=0
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.665777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerDied","Data":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.668665 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"3ea8f9e51f3c690e4d7e7df0149e187df6541e37b12dcea391f106c1a4377dc2"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670006 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670090 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670141 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.673823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"3e432b7bd7e7479ef22fb4a1f58571fc980580d6853a79877068a64f678ca70f"}
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.674045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.676763 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerID="4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027" exitCode=1
Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.676830
4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerDied","Data":"4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.678430 4867 scope.go:117] "RemoveContainer" containerID="4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.679255 4867 generic.go:334] "Generic (PLEG): container finished" podID="85e0628d-4132-4c09-9da0-35db43024c9c" containerID="e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d" exitCode=0 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.679295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerDied","Data":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"6ad135b222f9c6b4f7d1f78014739538c53ab615d351d5d0da90a6bfb8609f53"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683766 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 
14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683805 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.687929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"795af41e3a2def91739801d0722202b0215cc42eff67dec20742a6cb0eae5da3"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.687963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.688712 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.688914 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.690549 4867 generic.go:334] "Generic (PLEG): container finished" podID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerID="a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806" exitCode=0 Feb 14 
05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerDied","Data":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692237 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692390 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.693367 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.693526 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.796247 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.802782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.803169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.805356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913593 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.927426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.038524 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:25 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:25 crc kubenswrapper[4867]: > Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.050989 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:25 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 
05:30:25 crc kubenswrapper[4867]: > Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.079949 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.079980 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.086115 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"} pod="openstack-operators/openstack-operator-index-29mb7" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.086190 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" containerID="cri-o://56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" gracePeriod=30 Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.098789 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.102193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.102845 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.107318 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.107366 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.121134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.260865 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.555581 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"] Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.715193 4867 generic.go:334] "Generic (PLEG): container finished" podID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.715564 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerDied","Data":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.733807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"7c0a0f796434121a6b451116b6114beebec2415659a834ee519edae8f84bc637"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.734003 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796d588566-h9wcn" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.734048 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.739550 4867 generic.go:334] "Generic (PLEG): container finished" podID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" containerID="0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc" exitCode=1 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.739639 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerDied","Data":"0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.750706 4867 scope.go:117] "RemoveContainer" containerID="0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.753206 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerID="1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.753269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerDied","Data":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.758796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"19e39365907f39db0aefd7f0404c6815634871f70efdbfbc4ee845e439bb7415"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.759283 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.759812 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.766237 4867 generic.go:334] "Generic (PLEG): container finished" podID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerID="403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f" exitCode=1 Feb 14 05:30:25 crc kubenswrapper[4867]: 
I0214 05:30:25.766335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerDied","Data":"403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.767412 4867 scope.go:117] "RemoveContainer" containerID="403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.821386 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.823272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824168 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824214 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824289 4867 patch_prober.go:28] interesting 
pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824306 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824468 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831324 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831365 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831438 4867 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831953 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831980 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.266873 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" containerID="cri-o://fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" gracePeriod=25 Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.848953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"4389fd5035a82f3c51a86d0103019ee8c507417a4882c3decf546c05a63b7fb0"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.851368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 05:30:26 crc 
kubenswrapper[4867]: I0214 05:30:26.853751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.861546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"76bdc3a6742cdd5fb37605a49dd459333a78bcbc1eeb32e12badbc6d5d8cde36"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.862739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.865283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"96f714c002693e445ae683c2076037bd2aff1426418df2b693bdfa14640e4b82"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866159 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866241 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866269 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.868408 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"93c21c64b23ef48f6f85f2357742df956c068366b556eb0d6321f48f119996e8"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.869012 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.873369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"c86095bf55dbd005bfba9ff7baa5168e85bdb646b4e05c4676ed04e13f016c6d"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.022209 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.058555 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 05:30:27 crc kubenswrapper[4867]: E0214 05:30:27.333564 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27437fd9_2bc5_48ac_9e34_e733da15dd2b.slice/crio-86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.516444 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="metallb-system/frr-k8s-nzdwg" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787368 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787420 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787425 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787480 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.895286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"67bec8c1a78964f3af1c6beb53b597e598e64e7e5ded1183b3aeb8057ed46b8a"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.899083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"85318066019cefb00a675f400eaf63d7a35438da88d78e8a7709d1024bb99115"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.904265 4867 generic.go:334] "Generic (PLEG): container finished" podID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerID="86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7" exitCode=0 Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.904654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerDied","Data":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.905328 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.905381 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037681 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037737 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037682 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037803 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.044896 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.044969 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.045008 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.045084 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.126753 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.127285 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.126757 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.127369 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.220752 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Feb 14 05:30:28 crc kubenswrapper[4867]: [+]has-synced ok Feb 14 05:30:28 crc kubenswrapper[4867]: [-]process-running failed: reason withheld Feb 14 05:30:28 crc kubenswrapper[4867]: healthz check failed Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.220814 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398533 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398644 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398592 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure 
output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398766 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.927180 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"} Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.930517 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"63dd68177499d45fbfb9999ed189a1e4fa94afccb38254448208d9f63c6805de"} Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931048 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931082 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931965 4867 prober.go:107] "Probe 
failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.340784 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.341189 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.343947 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.344002 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.345215 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" 
podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.531086 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.646646 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.651646 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.655534 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.655644 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" Feb 14 05:30:29 crc 
kubenswrapper[4867]: I0214 05:30:29.745551 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": dial tcp 10.217.0.47:8081: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.745597 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": dial tcp 10.217.0.47:8081: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.948377 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qlkzp_4b71d414-e6bf-4f51-a808-1938c1edf207/router/0.log" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.948743 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerID="d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148" exitCode=137 Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.949908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerDied","Data":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.949944 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.848039 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.982484 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" exitCode=0 Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.982595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerDied","Data":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.989005 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.989314 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.003950 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qlkzp_4b71d414-e6bf-4f51-a808-1938c1edf207/router/0.log" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.075195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"44f0f426b9ce03e78b4461340baf65994577935885180313c500722c000c86c5"} Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.104995 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.107752 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.107805 
4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.253370 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.253427 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.350420 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:31 crc kubenswrapper[4867]: > Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.357234 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:31 crc kubenswrapper[4867]: > Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.869593 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.999337 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"] Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.056771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"9ba3b8e288c1810798f2349fac6c2540acaad348ddc3c638e43fd430ab504089"} Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.120808 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 05:30:32 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 05:30:32 crc kubenswrapper[4867]: [+]process-running ok Feb 14 05:30:32 crc kubenswrapper[4867]: healthz check failed Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.120854 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.146950 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.375233 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:32 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 
05:30:32 crc kubenswrapper[4867]: > Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.604314 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.604841 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.613807 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.613925 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" containerID="cri-o://702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06" gracePeriod=30 Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.771271 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.771364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.784032 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 
05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.784092 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.082734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerStarted","Data":"bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53"} Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.083280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerStarted","Data":"ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092"} Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.085784 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.106244 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.131404 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.133820 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" podStartSLOduration=26.130930516 podStartE2EDuration="26.130930516s" podCreationTimestamp="2026-02-14 05:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:30:33.104183424 +0000 UTC m=+4865.185120758" watchObservedRunningTime="2026-02-14 05:30:33.130930516 +0000 UTC m=+4865.211867830" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.381276 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" containerID="cri-o://339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9" gracePeriod=30 Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.451772 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.451940 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.502130 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:33 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:33 crc kubenswrapper[4867]: > Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.086477 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096214 4867 generic.go:334] "Generic (PLEG): container finished" podID="473b9472-6542-4e27-87e9-17365cd400e1" containerID="bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53" exitCode=0 Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerDied","Data":"bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53"} Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096532 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.100053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.516892 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:34 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:34 crc kubenswrapper[4867]: > Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.542100 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.110779 4867 generic.go:334] "Generic (PLEG): container finished" podID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerID="702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06" exitCode=0 Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.110859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerDied","Data":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.916357 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.116665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.117053 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.117162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.118561 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.144681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97" (OuterVolumeSpecName: "kube-api-access-ddk97") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). 
InnerVolumeSpecName "kube-api-access-ddk97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149151 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerDied","Data":"ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092"} Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149362 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.155010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220158 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220201 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220212 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.265695 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.285192 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.013961 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" path="/var/lib/kubelet/pods/c9309a87-899d-49c2-885b-9d5689c3086b/volumes" Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161783 4867 generic.go:334] "Generic (PLEG): container finished" podID="505de461-9e6f-4914-bf50-e2bf4149b566" containerID="339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9" exitCode=0 Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161825 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerDied","Data":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"0786b22eca9ace8c7f0637021537b8c4d7bac2e310ec10ad729a4d4b4602c81e"} Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.796795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.054437 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.073495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.149099 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.406870 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.190274 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"5b5431547eb607a4a1209617a9ab1ff6fe980675998dc6ffef354f0a308a263a"} Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.279589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 
14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.344469 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.347476 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.632605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.634131 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.747867 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.289541 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.579736 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:40 crc kubenswrapper[4867]: > Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.630743 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" 
within 1s Feb 14 05:30:40 crc kubenswrapper[4867]: > Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.747212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.343066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.393926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.431448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.431521 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:42 crc kubenswrapper[4867]: I0214 05:30:42.010539 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:42 crc kubenswrapper[4867]: I0214 05:30:42.148408 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:42 crc kubenswrapper[4867]: I0214 05:30:42.307485 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:42 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:42 crc kubenswrapper[4867]: > Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.081020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.489451 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.913111 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:43 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:43 crc kubenswrapper[4867]: > Feb 14 05:30:44 crc kubenswrapper[4867]: I0214 05:30:44.385587 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 05:30:44 crc kubenswrapper[4867]: I0214 05:30:44.507865 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:44 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:44 crc kubenswrapper[4867]: > Feb 14 05:30:46 crc kubenswrapper[4867]: I0214 05:30:46.845570 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" containerID="cri-o://563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998" gracePeriod=15 Feb 14 05:30:47 crc kubenswrapper[4867]: I0214 05:30:47.284133 4867 generic.go:334] "Generic (PLEG): container finished" podID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerID="563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998" exitCode=0 Feb 14 05:30:47 crc kubenswrapper[4867]: I0214 05:30:47.284185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" 
event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerDied","Data":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.332385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"8ecd1d525e321c7dcf77de95967937ad6f027cf611bd81c7d4857db407427727"} Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.333968 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": dial tcp 10.217.0.75:6443: connect: connection refused" start-of-body= Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.334058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.334090 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": dial tcp 10.217.0.75:6443: connect: connection refused" Feb 14 05:30:49 crc kubenswrapper[4867]: I0214 05:30:49.353616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:50 crc kubenswrapper[4867]: I0214 05:30:50.578223 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s 
Feb 14 05:30:50 crc kubenswrapper[4867]: > Feb 14 05:30:50 crc kubenswrapper[4867]: I0214 05:30:50.601551 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:50 crc kubenswrapper[4867]: > Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.316051 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:52 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:52 crc kubenswrapper[4867]: > Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.449357 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.522277 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.726533 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:30:54 crc kubenswrapper[4867]: I0214 05:30:54.416486 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" containerID="cri-o://02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287" gracePeriod=2 Feb 14 05:30:54 crc kubenswrapper[4867]: I0214 05:30:54.514972 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" 
podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:54 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:54 crc kubenswrapper[4867]: > Feb 14 05:30:55 crc kubenswrapper[4867]: I0214 05:30:55.444050 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287" exitCode=0 Feb 14 05:30:55 crc kubenswrapper[4867]: I0214 05:30:55.444359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287"} Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.150482 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205686 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod 
\"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.209818 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities" (OuterVolumeSpecName: "utilities") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.243762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.250527 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs" (OuterVolumeSpecName: "kube-api-access-696zs") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "kube-api-access-696zs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310890 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310923 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310934 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.460418 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910"} Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.460498 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.464334 4867 scope.go:117] "RemoveContainer" containerID="02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.505006 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.513232 4867 scope.go:117] "RemoveContainer" containerID="d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479" Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.521122 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.556644 4867 scope.go:117] "RemoveContainer" containerID="8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf" Feb 14 05:30:57 crc kubenswrapper[4867]: I0214 05:30:57.013300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" path="/var/lib/kubelet/pods/ae8a4292-e933-464b-b36d-918f43ce6f65/volumes" Feb 14 05:30:58 crc kubenswrapper[4867]: I0214 05:30:58.170976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.564455 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.632014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.808857 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:31:00 
crc kubenswrapper[4867]: I0214 05:31:00.598066 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:31:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:31:00 crc kubenswrapper[4867]: > Feb 14 05:31:00 crc kubenswrapper[4867]: I0214 05:31:00.766831 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250623 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250706 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250771 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.252369 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.252456 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8" gracePeriod=600 Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.325795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.409929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526584 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8" exitCode=0 Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526672 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"} Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526740 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.527085 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" containerID="cri-o://d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" gracePeriod=2 Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.216852 4867 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.447096 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.548179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"} Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551651 4867 generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" exitCode=0 Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"} Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551809 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" containerID="cri-o://87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47" gracePeriod=2 Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551755 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551881 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"611fc79292fb2762358fe75567d94939459a2919b3fc494b0f725c85bd01c821"} Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551915 4867 scope.go:117] "RemoveContainer" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.580827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.588809 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.589108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.593790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities" (OuterVolumeSpecName: "utilities") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.619444 4867 scope.go:117] "RemoveContainer" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.620976 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn" (OuterVolumeSpecName: "kube-api-access-mg5kn") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). InnerVolumeSpecName "kube-api-access-mg5kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.695084 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.695152 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") on node \"crc\" DevicePath \"\"" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.781207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.847256 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.854903 4867 scope.go:117] "RemoveContainer" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.928747 4867 scope.go:117] "RemoveContainer" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.944202 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": container with ID starting with d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c not found: ID does not exist" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.944740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"} err="failed to get container status \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": rpc error: code = NotFound desc = could not find container \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": container with ID starting with d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c not found: ID does not exist" Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.944773 4867 scope.go:117] "RemoveContainer" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3" Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.945431 4867 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": container with ID starting with e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3 not found: ID does not exist" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.945474    4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} err="failed to get container status \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": rpc error: code = NotFound desc = could not find container \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": container with ID starting with e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3 not found: ID does not exist"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.945520    4867 scope.go:117] "RemoveContainer" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"
Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.946778    4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": container with ID starting with 5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d not found: ID does not exist" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.946807    4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"} err="failed to get container status \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": rpc error: code = NotFound desc = could not find container \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": container with ID starting with 5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d not found: ID does not exist"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.989307    4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.060019    4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.257653    4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.359729    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.360069    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.360537    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.380821    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9" (OuterVolumeSpecName: "kube-api-access-p5kf9") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "kube-api-access-p5kf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.385187    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities" (OuterVolumeSpecName: "utilities") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.461436    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465114    4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465188    4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465206    4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.522306    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrccv"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.591189    4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47" exitCode=0
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.592408    4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593616    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"}
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593691    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8"}
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593721    4867 scope.go:117] "RemoveContainer" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.625768    4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrccv"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.669612    4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.684717    4867 scope.go:117] "RemoveContainer" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.741632    4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.876754    4867 scope.go:117] "RemoveContainer" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.959865    4867 scope.go:117] "RemoveContainer" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.960695    4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": container with ID starting with 87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47 not found: ID does not exist" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.960732    4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"} err="failed to get container status \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": rpc error: code = NotFound desc = could not find container \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": container with ID starting with 87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47 not found: ID does not exist"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.960757    4867 scope.go:117] "RemoveContainer" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.961027    4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": container with ID starting with 1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7 not found: ID does not exist" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961064    4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} err="failed to get container status \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": rpc error: code = NotFound desc = could not find container \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": container with ID starting with 1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7 not found: ID does not exist"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961084    4867 scope.go:117] "RemoveContainer" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.961381    4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": container with ID starting with 36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f not found: ID does not exist" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961414    4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"} err="failed to get container status \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": rpc error: code = NotFound desc = could not find container \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": container with ID starting with 36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f not found: ID does not exist"
Feb 14 05:31:05 crc kubenswrapper[4867]: I0214 05:31:05.011008    4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" path="/var/lib/kubelet/pods/09ba042e-98c3-43cc-aa6a-efbb9a63ae61/volumes"
Feb 14 05:31:05 crc kubenswrapper[4867]: I0214 05:31:05.013191    4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" path="/var/lib/kubelet/pods/c07eb1e9-f4cc-4664-b9f6-80322fe0644a/volumes"
Feb 14 05:31:10 crc kubenswrapper[4867]: I0214 05:31:10.655564    4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:10 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:10 crc kubenswrapper[4867]: >
Feb 14 05:31:17 crc kubenswrapper[4867]: I0214 05:31:17.970096    4867 scope.go:117] "RemoveContainer" containerID="ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.690652    4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:20 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:20 crc kubenswrapper[4867]: >
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.692753    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.694056    4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"} pod="openshift-marketplace/redhat-operators-9jj9q" containerMessage="Container registry-server failed startup probe, will be restarted"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.694214    4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" containerID="cri-o://0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03" gracePeriod=30
Feb 14 05:31:33 crc kubenswrapper[4867]: I0214 05:31:33.939403    4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03" exitCode=0
Feb 14 05:31:33 crc kubenswrapper[4867]: I0214 05:31:33.939498    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"}
Feb 14 05:31:35 crc kubenswrapper[4867]: I0214 05:31:35.973421    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"}
Feb 14 05:31:39 crc kubenswrapper[4867]: I0214 05:31:39.545056    4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:39 crc kubenswrapper[4867]: I0214 05:31:39.545615    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:40 crc kubenswrapper[4867]: I0214 05:31:40.595011    4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:40 crc kubenswrapper[4867]: >
Feb 14 05:31:50 crc kubenswrapper[4867]: I0214 05:31:50.592613    4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:50 crc kubenswrapper[4867]: >
Feb 14 05:32:00 crc kubenswrapper[4867]: I0214 05:32:00.595027    4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:32:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:32:00 crc kubenswrapper[4867]: >
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.121374    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.191114    4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.435464    4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:11 crc kubenswrapper[4867]: I0214 05:32:11.418095    4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" containerID="cri-o://6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27" gracePeriod=2
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.484373    4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27" exitCode=0
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.485163    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"}
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.486782    4867 scope.go:117] "RemoveContainer" containerID="0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.602482    4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.758841    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759486    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759585    4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759901    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities" (OuterVolumeSpecName: "utilities") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.778425    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6" (OuterVolumeSpecName: "kube-api-access-6dcv6") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "kube-api-access-6dcv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.863397    4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.863431    4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.906361    4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.965082    4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.503777    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"6b53ea8d4257c47786cd3a09e618ae66005b213cde9dca1141144554e272f271"}
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.504250    4867 scope.go:117] "RemoveContainer" containerID="6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.504400    4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.533068    4867 scope.go:117] "RemoveContainer" containerID="b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.565302    4867 scope.go:117] "RemoveContainer" containerID="3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.576536    4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.593414    4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:15 crc kubenswrapper[4867]: I0214 05:32:15.021030    4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" path="/var/lib/kubelet/pods/3532ff4a-374c-407b-b01c-b63267b0f9f9/volumes"
Feb 14 05:32:47 crc kubenswrapper[4867]: I0214 05:32:47.929826    4867 generic.go:334] "Generic (PLEG): container finished" podID="652d53d9-a4c0-4061-b817-ca5173785521" containerID="075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715" exitCode=0
Feb 14 05:32:47 crc kubenswrapper[4867]: I0214 05:32:47.929935    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerDied","Data":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"}
Feb 14 05:32:48 crc kubenswrapper[4867]: I0214 05:32:48.960685    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"ebe3d08837c845b7ee5ed212ba8dbb14e4590da7452a878dc78de2a88b4b09a9"}
Feb 14 05:33:01 crc kubenswrapper[4867]: I0214 05:33:01.251640    4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:33:01 crc kubenswrapper[4867]: I0214 05:33:01.252291    4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:33:06 crc kubenswrapper[4867]: I0214 05:33:06.449140    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:06 crc kubenswrapper[4867]: I0214 05:33:06.449736    4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:26 crc kubenswrapper[4867]: I0214 05:33:26.454642    4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:26 crc kubenswrapper[4867]: I0214 05:33:26.459156    4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:31 crc kubenswrapper[4867]: I0214 05:33:31.250861    4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:33:31 crc kubenswrapper[4867]: I0214 05:33:31.251572    4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.251596    4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.252931    4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.253067    4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.255051    4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.255211    4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" gracePeriod=600
Feb 14 05:34:01 crc kubenswrapper[4867]: E0214 05:34:01.379643    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.887865    4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" exitCode=0
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.887946    4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"}
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.888337    4867 scope.go:117] "RemoveContainer" containerID="de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.889374    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:01 crc kubenswrapper[4867]: E0214 05:34:01.889786    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:14 crc kubenswrapper[4867]: I0214 05:34:14.997715    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:14 crc kubenswrapper[4867]: E0214 05:34:14.999539    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:29 crc kubenswrapper[4867]: I0214 05:34:29.997755    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:30 crc kubenswrapper[4867]: E0214 05:34:30.000578    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:40 crc kubenswrapper[4867]: I0214 05:34:40.998862    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:41 crc kubenswrapper[4867]: E0214 05:34:40.999421    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:55 crc kubenswrapper[4867]: I0214 05:34:55.997748    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:55 crc kubenswrapper[4867]: E0214 05:34:55.998716    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:09 crc kubenswrapper[4867]: I0214 05:35:09.997238    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:09 crc kubenswrapper[4867]: E0214 05:35:09.998137    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:22 crc kubenswrapper[4867]: I0214 05:35:22.998375    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:23 crc kubenswrapper[4867]: E0214 05:35:22.999395    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:37 crc kubenswrapper[4867]: I0214 05:35:37.997120    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:37 crc kubenswrapper[4867]: E0214 05:35:37.998149    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:49 crc kubenswrapper[4867]: I0214 05:35:49.007182    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:49 crc kubenswrapper[4867]: E0214 05:35:49.009803    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:03 crc kubenswrapper[4867]: I0214 05:36:03.003703    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:03 crc kubenswrapper[4867]: E0214 05:36:03.010177    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:16 crc kubenswrapper[4867]: I0214 05:36:16.997446    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:16 crc kubenswrapper[4867]: E0214 05:36:16.999669    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:30 crc kubenswrapper[4867]: I0214 05:36:30.484734    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:30 crc kubenswrapper[4867]: E0214 05:36:30.485874    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:44 crc kubenswrapper[4867]: I0214 05:36:43.998263    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:44 crc kubenswrapper[4867]: E0214 05:36:43.999378    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:56 crc kubenswrapper[4867]: I0214 05:36:56.998240    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:57 crc kubenswrapper[4867]: E0214 05:36:57.000050    4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:37:11 crc kubenswrapper[4867]: I0214 05:37:11.997524    4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:37:11 crc kubenswrapper[4867]: E0214
05:37:11.998526 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:37:22 crc kubenswrapper[4867]: I0214 05:37:22.998155 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:37:22 crc kubenswrapper[4867]: E0214 05:37:22.999143 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:37:34 crc kubenswrapper[4867]: I0214 05:37:34.997980 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:37:35 crc kubenswrapper[4867]: E0214 05:37:34.998955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:37:47 crc kubenswrapper[4867]: I0214 05:37:47.000802 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:37:47 crc 
kubenswrapper[4867]: E0214 05:37:47.001767 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:01 crc kubenswrapper[4867]: I0214 05:38:01.001238 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:38:01 crc kubenswrapper[4867]: E0214 05:38:01.002578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:12 crc kubenswrapper[4867]: I0214 05:38:12.997760 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:38:12 crc kubenswrapper[4867]: E0214 05:38:12.998933 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:23 crc kubenswrapper[4867]: I0214 05:38:23.998629 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 
14 05:38:24 crc kubenswrapper[4867]: E0214 05:38:24.000087 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:34 crc kubenswrapper[4867]: I0214 05:38:34.999084 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:38:35 crc kubenswrapper[4867]: E0214 05:38:35.000588 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:45 crc kubenswrapper[4867]: I0214 05:38:45.998490 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:38:46 crc kubenswrapper[4867]: E0214 05:38:45.999487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:38:56 crc kubenswrapper[4867]: I0214 05:38:56.997650 4867 scope.go:117] "RemoveContainer" 
containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:38:56 crc kubenswrapper[4867]: E0214 05:38:56.998909 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:39:09 crc kubenswrapper[4867]: I0214 05:39:09.998071 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:39:10 crc kubenswrapper[4867]: I0214 05:39:10.793449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"} Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.035176 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039453 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039481 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039500 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039526 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039549 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039569 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039595 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039604 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039623 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039631 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039641 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039648 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039658 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039666 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039678 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039685 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039712 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039720 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039738 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039746 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039761 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039769 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039790 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040093 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040115 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040134 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040150 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040158 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040165 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.044659 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.106076 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.198730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.199177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.199534 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod 
\"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.304270 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.304642 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.324731 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: 
\"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.371600 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:35 crc kubenswrapper[4867]: I0214 05:40:35.932404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:40:35 crc kubenswrapper[4867]: W0214 05:40:35.944260 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1623abf8_a3d2_4598_8f39_f0153f263393.slice/crio-295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532 WatchSource:0}: Error finding container 295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532: Status 404 returned error can't find the container with id 295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532 Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942471 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4" exitCode=0 Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4"} Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532"} Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.946915 4867 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:40:38 crc kubenswrapper[4867]: I0214 05:40:38.968645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40"} Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.380005 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.384727 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.396499 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564431 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: 
\"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.667782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.667977 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.668158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.670429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.670583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.688029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.735326 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:41 crc kubenswrapper[4867]: I0214 05:40:41.904899 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:41 crc kubenswrapper[4867]: W0214 05:40:41.922831 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc89371c3_d8bf_4ac1_8b52_9df945ca0c87.slice/crio-2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524 WatchSource:0}: Error finding container 2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524: Status 404 returned error can't find the container with id 2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524 Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.012537 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40" exitCode=0 Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.012658 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" 
event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40"} Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.019120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.032385 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" exitCode=0 Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.032894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.037923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.094650 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4zxt" podStartSLOduration=4.362208947 podStartE2EDuration="10.093625546s" podCreationTimestamp="2026-02-14 05:40:33 +0000 UTC" firstStartedPulling="2026-02-14 05:40:36.94653862 +0000 UTC m=+5469.027475944" lastFinishedPulling="2026-02-14 05:40:42.677955229 +0000 UTC m=+5474.758892543" observedRunningTime="2026-02-14 05:40:43.08616936 +0000 UTC m=+5475.167106674" watchObservedRunningTime="2026-02-14 05:40:43.093625546 +0000 UTC 
m=+5475.174562860" Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.050821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.372006 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.372670 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:45 crc kubenswrapper[4867]: I0214 05:40:45.445370 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:45 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:45 crc kubenswrapper[4867]: > Feb 14 05:40:48 crc kubenswrapper[4867]: I0214 05:40:48.116180 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" exitCode=0 Feb 14 05:40:48 crc kubenswrapper[4867]: I0214 05:40:48.116279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} Feb 14 05:40:49 crc kubenswrapper[4867]: I0214 05:40:49.132840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" 
event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} Feb 14 05:40:49 crc kubenswrapper[4867]: I0214 05:40:49.158786 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlqjg" podStartSLOduration=3.507649541 podStartE2EDuration="9.158763982s" podCreationTimestamp="2026-02-14 05:40:40 +0000 UTC" firstStartedPulling="2026-02-14 05:40:43.034877495 +0000 UTC m=+5475.115814809" lastFinishedPulling="2026-02-14 05:40:48.685991936 +0000 UTC m=+5480.766929250" observedRunningTime="2026-02-14 05:40:49.153770481 +0000 UTC m=+5481.234707805" watchObservedRunningTime="2026-02-14 05:40:49.158763982 +0000 UTC m=+5481.239701306" Feb 14 05:40:50 crc kubenswrapper[4867]: I0214 05:40:50.736405 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:50 crc kubenswrapper[4867]: I0214 05:40:50.737085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:51 crc kubenswrapper[4867]: I0214 05:40:51.798615 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:51 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:51 crc kubenswrapper[4867]: > Feb 14 05:40:55 crc kubenswrapper[4867]: I0214 05:40:55.460158 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:55 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:55 crc 
kubenswrapper[4867]: > Feb 14 05:41:01 crc kubenswrapper[4867]: I0214 05:41:01.785467 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" probeResult="failure" output=< Feb 14 05:41:01 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:41:01 crc kubenswrapper[4867]: > Feb 14 05:41:05 crc kubenswrapper[4867]: I0214 05:41:05.436440 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:41:05 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:41:05 crc kubenswrapper[4867]: > Feb 14 05:41:10 crc kubenswrapper[4867]: I0214 05:41:10.788164 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:10 crc kubenswrapper[4867]: I0214 05:41:10.912221 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:11 crc kubenswrapper[4867]: I0214 05:41:11.596998 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:12 crc kubenswrapper[4867]: I0214 05:41:12.449798 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" containerID="cri-o://db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" gracePeriod=2 Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.058008 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.179472 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180707 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities" (OuterVolumeSpecName: "utilities") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.183791 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.194159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8" (OuterVolumeSpecName: "kube-api-access-kd5n8") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "kube-api-access-kd5n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.247027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.288015 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.288053 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466681 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" exitCode=0 Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466812 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466845 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.469981 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524"} Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.470032 4867 scope.go:117] "RemoveContainer" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.530184 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.538157 4867 scope.go:117] "RemoveContainer" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.543467 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.563883 4867 scope.go:117] "RemoveContainer" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.647610 4867 scope.go:117] "RemoveContainer" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.649093 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": container with ID starting with db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9 not found: ID does not exist" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.649538 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} err="failed to get container status \"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": rpc error: code = NotFound desc = could not find container \"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": container with ID starting with db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9 not found: ID does not exist" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.649587 4867 scope.go:117] "RemoveContainer" 
containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.650174 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": container with ID starting with cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067 not found: ID does not exist" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650232 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} err="failed to get container status \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": rpc error: code = NotFound desc = could not find container \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": container with ID starting with cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067 not found: ID does not exist" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650280 4867 scope.go:117] "RemoveContainer" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.650785 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": container with ID starting with 3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3 not found: ID does not exist" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650827 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3"} err="failed to get container status \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": rpc error: code = NotFound desc = could not find container \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": container with ID starting with 3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3 not found: ID does not exist" Feb 14 05:41:14 crc kubenswrapper[4867]: I0214 05:41:14.444205 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:14 crc kubenswrapper[4867]: I0214 05:41:14.506549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.019532 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" path="/var/lib/kubelet/pods/c89371c3-d8bf-4ac1-8b52-9df945ca0c87/volumes" Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.991835 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.992229 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" containerID="cri-o://cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" gracePeriod=2 Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512375 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" exitCode=0 Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512430 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa"} Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532"} Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512962 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.561463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697035 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: 
\"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.703948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr" (OuterVolumeSpecName: "kube-api-access-2tgsr") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "kube-api-access-2tgsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.709739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities" (OuterVolumeSpecName: "utilities") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.748372 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801107 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801571 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801584 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.522312 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.550814 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.562061 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:19 crc kubenswrapper[4867]: I0214 05:41:19.019219 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" path="/var/lib/kubelet/pods/1623abf8-a3d2-4598-8f39-f0153f263393/volumes" Feb 14 05:41:31 crc kubenswrapper[4867]: I0214 05:41:31.251462 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 14 05:41:31 crc kubenswrapper[4867]: I0214 05:41:31.252175 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.734685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762374 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762406 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762431 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762440 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762527 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762537 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762548 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc 
kubenswrapper[4867]: I0214 05:41:52.762555 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762579 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762586 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762681 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.763099 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.763138 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.766879 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.767023 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945543 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod 
\"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.946307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.946404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.981107 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.102017 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.684525 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977447 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc" exitCode=0 Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"} Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerStarted","Data":"03b63e6e338e97fd57df5fde6fda2a32cf024491536c59c6434b27784e69fdbf"} Feb 14 05:41:56 crc kubenswrapper[4867]: I0214 05:41:56.002214 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684" exitCode=0 Feb 14 05:41:56 crc kubenswrapper[4867]: I0214 05:41:56.002315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"} Feb 14 05:41:57 crc kubenswrapper[4867]: I0214 05:41:57.018426 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" 
event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerStarted","Data":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"}
Feb 14 05:41:57 crc kubenswrapper[4867]: I0214 05:41:57.043116 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnp9l" podStartSLOduration=2.645893405 podStartE2EDuration="5.04309463s" podCreationTimestamp="2026-02-14 05:41:52 +0000 UTC" firstStartedPulling="2026-02-14 05:41:53.979548569 +0000 UTC m=+5546.060485883" lastFinishedPulling="2026-02-14 05:41:56.376749794 +0000 UTC m=+5548.457687108" observedRunningTime="2026-02-14 05:41:57.036049235 +0000 UTC m=+5549.116986549" watchObservedRunningTime="2026-02-14 05:41:57.04309463 +0000 UTC m=+5549.124031944"
Feb 14 05:42:01 crc kubenswrapper[4867]: I0214 05:42:01.251074 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:42:01 crc kubenswrapper[4867]: I0214 05:42:01.251922 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.103116 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.103773 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.172485 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:04 crc kubenswrapper[4867]: I0214 05:42:04.196456 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:04 crc kubenswrapper[4867]: I0214 05:42:04.290987 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"]
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.146611 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnp9l" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server" containerID="cri-o://53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" gracePeriod=2
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.737999 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.846836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") "
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.846890 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") "
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.847167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") "
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.850011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities" (OuterVolumeSpecName: "utilities") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.858847 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv" (OuterVolumeSpecName: "kube-api-access-spttv") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "kube-api-access-spttv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.878096 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950775 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950833 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950847 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") on node \"crc\" DevicePath \"\""
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161299 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" exitCode=0
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161356 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"}
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161752 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"03b63e6e338e97fd57df5fde6fda2a32cf024491536c59c6434b27784e69fdbf"}
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161778 4867 scope.go:117] "RemoveContainer" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.189910 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"]
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.199277 4867 scope.go:117] "RemoveContainer" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.204571 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"]
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.222135 4867 scope.go:117] "RemoveContainer" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.277964 4867 scope.go:117] "RemoveContainer" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"
Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.278550 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": container with ID starting with 53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e not found: ID does not exist" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.278578 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"} err="failed to get container status \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": rpc error: code = NotFound desc = could not find container \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": container with ID starting with 53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e not found: ID does not exist"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.278609 4867 scope.go:117] "RemoveContainer" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"
Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.278978 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": container with ID starting with d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684 not found: ID does not exist" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279003 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"} err="failed to get container status \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": rpc error: code = NotFound desc = could not find container \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": container with ID starting with d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684 not found: ID does not exist"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279022 4867 scope.go:117] "RemoveContainer" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"
Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.279400 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": container with ID starting with c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc not found: ID does not exist" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"
Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279434 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"} err="failed to get container status \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": rpc error: code = NotFound desc = could not find container \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": container with ID starting with c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc not found: ID does not exist"
Feb 14 05:42:09 crc kubenswrapper[4867]: I0214 05:42:09.012623 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" path="/var/lib/kubelet/pods/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70/volumes"
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251042 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251880 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.252955 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.253056 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" gracePeriod=600
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.441726 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" exitCode=0
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.441816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"}
Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.442207 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:42:32 crc kubenswrapper[4867]: I0214 05:42:32.468623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"}
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.017101 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018241 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-content"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018256 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-content"
Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018303 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018310 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server"
Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018325 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-utilities"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018332 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-utilities"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018619 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.028114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.035837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125828 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.229022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.229725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.381415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.658364 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.237442 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.552953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"}
Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.553023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"70a3c8fb291523d49cc697cb4e2f3f0924e4d7a7ffe89d4339b8129115addb42"}
Feb 14 05:42:39 crc kubenswrapper[4867]: I0214 05:42:39.566205 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b" exitCode=0
Feb 14 05:42:39 crc kubenswrapper[4867]: I0214 05:42:39.566285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"}
Feb 14 05:42:41 crc kubenswrapper[4867]: I0214 05:42:41.610315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"}
Feb 14 05:42:55 crc kubenswrapper[4867]: I0214 05:42:55.776846 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141" exitCode=0
Feb 14 05:42:55 crc kubenswrapper[4867]: I0214 05:42:55.776968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"}
Feb 14 05:42:56 crc kubenswrapper[4867]: I0214 05:42:56.792430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"}
Feb 14 05:42:56 crc kubenswrapper[4867]: I0214 05:42:56.817216 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwkj9" podStartSLOduration=4.173064365 podStartE2EDuration="20.817193924s" podCreationTimestamp="2026-02-14 05:42:36 +0000 UTC" firstStartedPulling="2026-02-14 05:42:39.570052021 +0000 UTC m=+5591.650989335" lastFinishedPulling="2026-02-14 05:42:56.21418158 +0000 UTC m=+5608.295118894" observedRunningTime="2026-02-14 05:42:56.811041353 +0000 UTC m=+5608.891978667" watchObservedRunningTime="2026-02-14 05:42:56.817193924 +0000 UTC m=+5608.898131238"
Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.658966 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.659718 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:42:58 crc kubenswrapper[4867]: I0214 05:42:58.715036 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:42:58 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:42:58 crc kubenswrapper[4867]: >
Feb 14 05:43:08 crc kubenswrapper[4867]: I0214 05:43:08.708519 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:43:08 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:43:08 crc kubenswrapper[4867]: >
Feb 14 05:43:18 crc kubenswrapper[4867]: I0214 05:43:18.714356 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:43:18 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:43:18 crc kubenswrapper[4867]: >
Feb 14 05:43:28 crc kubenswrapper[4867]: I0214 05:43:28.709940 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:43:28 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:43:28 crc kubenswrapper[4867]: >
Feb 14 05:43:38 crc kubenswrapper[4867]: I0214 05:43:38.719138 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:43:38 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:43:38 crc kubenswrapper[4867]: >
Feb 14 05:43:48 crc kubenswrapper[4867]: I0214 05:43:48.316134 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:43:48 crc kubenswrapper[4867]: I0214 05:43:48.380627 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:43:48 crc kubenswrapper[4867]: I0214 05:43:48.572950 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:43:49 crc kubenswrapper[4867]: I0214 05:43:49.446131 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" containerID="cri-o://3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" gracePeriod=2
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.220851 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.251143 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"159832eb-a78e-4fcd-bbb3-42445194727f\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") "
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.251256 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"159832eb-a78e-4fcd-bbb3-42445194727f\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") "
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.251371 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"159832eb-a78e-4fcd-bbb3-42445194727f\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") "
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.252873 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities" (OuterVolumeSpecName: "utilities") pod "159832eb-a78e-4fcd-bbb3-42445194727f" (UID: "159832eb-a78e-4fcd-bbb3-42445194727f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.265551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm" (OuterVolumeSpecName: "kube-api-access-gthwm") pod "159832eb-a78e-4fcd-bbb3-42445194727f" (UID: "159832eb-a78e-4fcd-bbb3-42445194727f"). InnerVolumeSpecName "kube-api-access-gthwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.354808 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.355011 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") on node \"crc\" DevicePath \"\""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.404850 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "159832eb-a78e-4fcd-bbb3-42445194727f" (UID: "159832eb-a78e-4fcd-bbb3-42445194727f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.457199 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460684 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" exitCode=0
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"}
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460741 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"70a3c8fb291523d49cc697cb4e2f3f0924e4d7a7ffe89d4339b8129115addb42"}
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460792 4867 scope.go:117] "RemoveContainer" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.504352 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.508838 4867 scope.go:117] "RemoveContainer" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.515328 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"]
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.536266 4867 scope.go:117] "RemoveContainer" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.605899 4867 scope.go:117] "RemoveContainer" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"
Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.606749 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": container with ID starting with 3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641 not found: ID does not exist" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.606800 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"} err="failed to get container status \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": rpc error: code = NotFound desc = could not find container \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": container with ID starting with 3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641 not found: ID does not exist"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.606833 4867 scope.go:117] "RemoveContainer" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"
Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.607288 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": container with ID starting with f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141 not found: ID does not exist" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607329 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"} err="failed to get container status \"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": rpc error: code = NotFound desc = could not find container \"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": container with ID starting with f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141 not found: ID does not exist"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607361 4867 scope.go:117] "RemoveContainer" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"
Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.607699 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": container with ID starting with a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b not found: ID does not exist" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"
Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607724 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"} err="failed to get container status \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": rpc error: code = NotFound desc = could not find container \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": container with ID starting with a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b not found: ID does not exist"
Feb 14 05:43:51 crc kubenswrapper[4867]: I0214 05:43:51.020163 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" path="/var/lib/kubelet/pods/159832eb-a78e-4fcd-bbb3-42445194727f/volumes"
Feb 14 05:44:31 crc kubenswrapper[4867]: I0214 05:44:31.251653 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:44:31 crc kubenswrapper[4867]: I0214 05:44:31.252298 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.510396 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"]
Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512676 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-content"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512711 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-content"
Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512744 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-utilities"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512754 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-utilities"
Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512782 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512790 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.513203 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.514739 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.534066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"]
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.560456 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.560550 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.676753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.676892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"
Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.677125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779430 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779638 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.781219 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.786771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.798396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.851812 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.251038 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.251431 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.588146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"] Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.612909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerStarted","Data":"04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7"} Feb 14 05:45:02 crc kubenswrapper[4867]: I0214 05:45:02.634805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerStarted","Data":"b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9"} Feb 14 05:45:02 crc kubenswrapper[4867]: I0214 05:45:02.654873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" podStartSLOduration=2.654842918 podStartE2EDuration="2.654842918s" podCreationTimestamp="2026-02-14 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:45:02.652773463 +0000 UTC m=+5734.733710787" watchObservedRunningTime="2026-02-14 05:45:02.654842918 +0000 UTC m=+5734.735780252" Feb 14 05:45:04 crc kubenswrapper[4867]: I0214 05:45:04.658824 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerID="b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9" exitCode=0 Feb 14 05:45:04 crc kubenswrapper[4867]: I0214 05:45:04.658882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerDied","Data":"b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9"} Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.201131 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238839 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.240295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.248635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.249211 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch" (OuterVolumeSpecName: "kube-api-access-pxzch") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "kube-api-access-pxzch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341526 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341555 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341564 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.691668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerDied","Data":"04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7"} Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.692053 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.692162 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.813245 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.823369 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:45:07 crc kubenswrapper[4867]: I0214 05:45:07.036093 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" path="/var/lib/kubelet/pods/9f3d9933-ea61-47f2-a857-edd1af2baf67/volumes" Feb 14 05:45:18 crc kubenswrapper[4867]: I0214 05:45:18.747979 4867 scope.go:117] "RemoveContainer" containerID="7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251028 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251847 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253118 4867 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253188 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" gracePeriod=600 Feb 14 05:45:31 crc kubenswrapper[4867]: E0214 05:45:31.381583 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983168 4867 scope.go:117] "RemoveContainer" containerID="f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983019 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" exitCode=0 Feb 14 05:45:31 crc 
kubenswrapper[4867]: I0214 05:45:31.984061 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:45:31 crc kubenswrapper[4867]: E0214 05:45:31.984486 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:45:46 crc kubenswrapper[4867]: I0214 05:45:46.998098 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:45:46 crc kubenswrapper[4867]: E0214 05:45:46.999927 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:00 crc kubenswrapper[4867]: I0214 05:46:00.997885 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:01 crc kubenswrapper[4867]: E0214 05:46:00.998823 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 
14 05:46:12 crc kubenswrapper[4867]: I0214 05:46:12.998367 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:13 crc kubenswrapper[4867]: E0214 05:46:12.999622 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:23 crc kubenswrapper[4867]: I0214 05:46:23.997759 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:23 crc kubenswrapper[4867]: E0214 05:46:23.999657 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:36 crc kubenswrapper[4867]: I0214 05:46:36.998685 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:37 crc kubenswrapper[4867]: E0214 05:46:37.000039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:48 crc kubenswrapper[4867]: I0214 05:46:47.999602 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:48 crc kubenswrapper[4867]: E0214 05:46:48.003455 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:01 crc kubenswrapper[4867]: I0214 05:47:01.998783 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:02 crc kubenswrapper[4867]: E0214 05:47:01.999892 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:14 crc kubenswrapper[4867]: I0214 05:47:14.997867 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:14 crc kubenswrapper[4867]: E0214 05:47:14.998817 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:18 crc kubenswrapper[4867]: I0214 05:47:18.897115 4867 scope.go:117] "RemoveContainer" containerID="cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40" Feb 14 05:47:18 crc kubenswrapper[4867]: I0214 05:47:18.922732 4867 scope.go:117] "RemoveContainer" containerID="196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4" Feb 14 05:47:19 crc kubenswrapper[4867]: I0214 05:47:19.001976 4867 scope.go:117] "RemoveContainer" containerID="cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" Feb 14 05:47:29 crc kubenswrapper[4867]: I0214 05:47:29.997338 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:29 crc kubenswrapper[4867]: E0214 05:47:29.998850 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:44 crc kubenswrapper[4867]: I0214 05:47:44.997891 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:44 crc kubenswrapper[4867]: E0214 05:47:44.998789 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 
14 05:47:58 crc kubenswrapper[4867]: I0214 05:47:57.999566 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:58 crc kubenswrapper[4867]: E0214 05:47:58.001344 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:13 crc kubenswrapper[4867]: I0214 05:48:12.998911 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:13 crc kubenswrapper[4867]: E0214 05:48:12.999986 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:23 crc kubenswrapper[4867]: I0214 05:48:23.998346 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:24 crc kubenswrapper[4867]: E0214 05:48:23.999287 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:35 crc kubenswrapper[4867]: I0214 05:48:35.997293 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:35 crc kubenswrapper[4867]: E0214 05:48:35.998056 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:49 crc kubenswrapper[4867]: I0214 05:48:49.997862 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:49 crc kubenswrapper[4867]: E0214 05:48:49.998705 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:03 crc kubenswrapper[4867]: I0214 05:49:02.998284 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:03 crc kubenswrapper[4867]: E0214 05:49:03.001459 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:16 crc kubenswrapper[4867]: I0214 05:49:16.998850 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:17 crc kubenswrapper[4867]: E0214 05:49:17.000125 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:29 crc kubenswrapper[4867]: I0214 05:49:29.007908 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:29 crc kubenswrapper[4867]: E0214 05:49:29.010655 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:42 crc kubenswrapper[4867]: I0214 05:49:42.997389 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:42 crc kubenswrapper[4867]: E0214 05:49:42.998390 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:49:50 crc kubenswrapper[4867]: I0214 05:49:50.919362 4867 generic.go:334] "Generic (PLEG): container finished" podID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerID="b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c" exitCode=1
Feb 14 05:49:50 crc kubenswrapper[4867]: I0214 05:49:50.919457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerDied","Data":"b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c"}
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.410624 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475415 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475464 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475772 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475913 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.477653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data" (OuterVolumeSpecName: "config-data") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.478287 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.486616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.489843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.491052 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z" (OuterVolumeSpecName: "kube-api-access-vh78z") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "kube-api-access-vh78z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.535764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.550244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.555497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.564773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583915 4867 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583958 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583968 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583979 4867 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583991 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584003 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584013 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584020 4867 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.585908 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.619482 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.687794 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.947827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerDied","Data":"69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a"}
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.948170 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a"
Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.947909 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 14 05:49:56 crc kubenswrapper[4867]: I0214 05:49:56.997426 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
Feb 14 05:49:56 crc kubenswrapper[4867]: E0214 05:49:56.998047 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.230123 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 14 05:49:59 crc kubenswrapper[4867]: E0214 05:49:59.231228 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231242 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles"
Feb 14 05:49:59 crc kubenswrapper[4867]: E0214 05:49:59.231286 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231292 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231532 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231549 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.232454 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.234704 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wxg74"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.256399 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.344663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.344758 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.446590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.446714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.448605 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.476349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.508982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.572301 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 14 05:50:00 crc kubenswrapper[4867]: I0214 05:50:00.054292 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 14 05:50:00 crc kubenswrapper[4867]: I0214 05:50:00.064300 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 05:50:01 crc kubenswrapper[4867]: I0214 05:50:01.054613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be58ab35-1c46-426e-87a1-9010a643ead5","Type":"ContainerStarted","Data":"f17eb7fe48ca9d0696a2919b81b7780674d72a55c18fc53e5110e168118f3e53"}
Feb 14 05:50:03 crc kubenswrapper[4867]: I0214 05:50:03.075378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be58ab35-1c46-426e-87a1-9010a643ead5","Type":"ContainerStarted","Data":"e89f42246f95386c41a8b48ca3284511cdaac889dfff3d346f0eeb99b832072d"}
Feb 14 05:50:03 crc kubenswrapper[4867]: I0214 05:50:03.097847 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.7815386869999998 podStartE2EDuration="4.097829699s" podCreationTimestamp="2026-02-14 05:49:59 +0000 UTC" firstStartedPulling="2026-02-14 05:50:00.0640454 +0000 UTC m=+6032.144982714" lastFinishedPulling="2026-02-14 05:50:02.380336412 +0000 UTC m=+6034.461273726" observedRunningTime="2026-02-14 05:50:03.09097153 +0000 UTC m=+6035.171908844" watchObservedRunningTime="2026-02-14 05:50:03.097829699 +0000 UTC m=+6035.178767003"
Feb 14 05:50:07 crc kubenswrapper[4867]: I0214 05:50:07.997333 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
Feb 14 05:50:07 crc kubenswrapper[4867]: E0214 05:50:07.999203 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:50:19 crc kubenswrapper[4867]: I0214 05:50:19.009840 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
Feb 14 05:50:19 crc kubenswrapper[4867]: E0214 05:50:19.010843 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:50:29 crc kubenswrapper[4867]: I0214 05:50:29.997723 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
Feb 14 05:50:29 crc kubenswrapper[4867]: E0214 05:50:29.998445 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:50:40 crc kubenswrapper[4867]: I0214 05:50:40.997998 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
Feb 14 05:50:41 crc kubenswrapper[4867]: I0214 05:50:41.502607 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"}
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.528851 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"]
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.532160 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.534269 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rtzc7"/"kube-root-ca.crt"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.535682 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rtzc7"/"openshift-service-ca.crt"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.560276 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"]
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.688286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.688338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.792567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.792845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.793113 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.812660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.851682 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:50:53 crc kubenswrapper[4867]: I0214 05:50:53.429669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"]
Feb 14 05:50:53 crc kubenswrapper[4867]: I0214 05:50:53.624647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"2d6a5a00012c52a2aac1e8dffdc748b022caf87a8674b148896c8bda016c8acb"}
Feb 14 05:51:01 crc kubenswrapper[4867]: I0214 05:51:01.723697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"}
Feb 14 05:51:02 crc kubenswrapper[4867]: I0214 05:51:02.744826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76"}
Feb 14 05:51:02 crc kubenswrapper[4867]: I0214 05:51:02.770831 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/must-gather-wmzns" podStartSLOduration=2.887364411 podStartE2EDuration="10.770811301s" podCreationTimestamp="2026-02-14 05:50:52 +0000 UTC" firstStartedPulling="2026-02-14 05:50:53.430273596 +0000 UTC m=+6085.511210920" lastFinishedPulling="2026-02-14 05:51:01.313720486 +0000 UTC m=+6093.394657810" observedRunningTime="2026-02-14 05:51:02.763993962 +0000 UTC m=+6094.844931306" watchObservedRunningTime="2026-02-14 05:51:02.770811301 +0000 UTC m=+6094.851748605"
Feb 14 05:51:07 crc kubenswrapper[4867]: E0214 05:51:07.419544 4867 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.113:59912->38.102.83.113:33373: read tcp 38.102.83.113:59912->38.102.83.113:33373: read: connection reset by peer
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.302553 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"]
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.305161 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.311798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.414797 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.415238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.517793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.517953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.518820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.538295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.623877 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.825567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerStarted","Data":"624245ddd3f542dcc22ae9c7894ed7fd3efca5ec2c05b8bd8c8b8eec3e915a96"}
Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.600917 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.605690 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j95rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-tz25z_openshift-must-gather-rtzc7(b8b6ff93-1581-48eb-b74d-f7c97cdb1918): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.607090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918"
Feb 14 05:51:25 crc kubenswrapper[4867]: E0214 05:51:25.074312 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918"
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.949464 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"]
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.953196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.963217 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"]
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.990957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.991115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.991314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.093783 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.094316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.094876 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.095126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x"
Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.096471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") "
pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.120394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.274408 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.296695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerStarted","Data":"fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f"} Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.322185 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podStartSLOduration=1.569922638 podStartE2EDuration="32.322164308s" podCreationTimestamp="2026-02-14 05:51:08 +0000 UTC" firstStartedPulling="2026-02-14 05:51:08.684943755 +0000 UTC m=+6100.765881069" lastFinishedPulling="2026-02-14 05:51:39.437185415 +0000 UTC m=+6131.518122739" observedRunningTime="2026-02-14 05:51:40.314841076 +0000 UTC m=+6132.395778400" watchObservedRunningTime="2026-02-14 05:51:40.322164308 +0000 UTC m=+6132.403101622" Feb 14 05:51:41 crc kubenswrapper[4867]: I0214 05:51:41.713381 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:51:42 crc kubenswrapper[4867]: I0214 05:51:42.316099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" 
event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"9ee331d8f9be369631f10654c158e87afe7a9d548a81fbe230376595ebd85ecc"} Feb 14 05:51:43 crc kubenswrapper[4867]: I0214 05:51:43.325305 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" exitCode=0 Feb 14 05:51:43 crc kubenswrapper[4867]: I0214 05:51:43.325349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6"} Feb 14 05:51:44 crc kubenswrapper[4867]: I0214 05:51:44.344316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} Feb 14 05:51:47 crc kubenswrapper[4867]: I0214 05:51:47.378018 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" exitCode=0 Feb 14 05:51:47 crc kubenswrapper[4867]: I0214 05:51:47.378083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} Feb 14 05:51:50 crc kubenswrapper[4867]: I0214 05:51:50.759839 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:51:50 crc kubenswrapper[4867]: I0214 05:51:50.762231 4867 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:51:52 crc kubenswrapper[4867]: I0214 05:51:52.913315 4867 trace.go:236] Trace[1536607917]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-bvb8v" (14-Feb-2026 05:51:51.302) (total time: 1610ms): Feb 14 05:51:52 crc kubenswrapper[4867]: Trace[1536607917]: [1.610399058s] [1.610399058s] END Feb 14 05:51:56 crc kubenswrapper[4867]: I0214 05:51:56.485522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} Feb 14 05:51:56 crc kubenswrapper[4867]: I0214 05:51:56.504756 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxd4x" podStartSLOduration=5.661832485 podStartE2EDuration="17.504736756s" podCreationTimestamp="2026-02-14 05:51:39 +0000 UTC" firstStartedPulling="2026-02-14 05:51:43.327759818 +0000 UTC m=+6135.408697122" lastFinishedPulling="2026-02-14 05:51:55.170664079 +0000 UTC m=+6147.251601393" observedRunningTime="2026-02-14 05:51:56.50181516 +0000 UTC m=+6148.582752474" watchObservedRunningTime="2026-02-14 05:51:56.504736756 +0000 UTC m=+6148.585674080" Feb 14 05:52:00 crc kubenswrapper[4867]: I0214 05:52:00.274584 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:00 crc kubenswrapper[4867]: I0214 05:52:00.275982 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:01 crc kubenswrapper[4867]: I0214 05:52:01.326915 4867 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" probeResult="failure" output=< Feb 14 05:52:01 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:52:01 crc kubenswrapper[4867]: > Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.622765 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.638980 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.673278 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777012 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777377 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: 
\"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.879591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.880219 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.880435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.881022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.881088 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") 
" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.899753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:10 crc kubenswrapper[4867]: I0214 05:52:10.015370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:10 crc kubenswrapper[4867]: I0214 05:52:10.935341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:10 crc kubenswrapper[4867]: W0214 05:52:10.944036 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03648482_256b_4fd0_94f3_f5dd889f5d49.slice/crio-4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c WatchSource:0}: Error finding container 4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c: Status 404 returned error can't find the container with id 4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.350094 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" probeResult="failure" output=< Feb 14 05:52:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:52:11 crc kubenswrapper[4867]: > Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649868 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" 
containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d" exitCode=0 Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"} Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c"} Feb 14 05:52:12 crc kubenswrapper[4867]: I0214 05:52:12.662127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} Feb 14 05:52:16 crc kubenswrapper[4867]: I0214 05:52:16.712026 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1" exitCode=0 Feb 14 05:52:16 crc kubenswrapper[4867]: I0214 05:52:16.712104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} Feb 14 05:52:17 crc kubenswrapper[4867]: I0214 05:52:17.725520 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"} Feb 14 05:52:17 crc 
kubenswrapper[4867]: I0214 05:52:17.762676 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mmhn" podStartSLOduration=3.300983856 podStartE2EDuration="8.762649406s" podCreationTimestamp="2026-02-14 05:52:09 +0000 UTC" firstStartedPulling="2026-02-14 05:52:11.652469649 +0000 UTC m=+6163.733406963" lastFinishedPulling="2026-02-14 05:52:17.114135199 +0000 UTC m=+6169.195072513" observedRunningTime="2026-02-14 05:52:17.750725463 +0000 UTC m=+6169.831662787" watchObservedRunningTime="2026-02-14 05:52:17.762649406 +0000 UTC m=+6169.843586720" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.016399 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.017047 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.074704 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.347501 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.400725 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:21 crc kubenswrapper[4867]: I0214 05:52:21.323312 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:21 crc kubenswrapper[4867]: I0214 05:52:21.813153 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" 
containerName="registry-server" containerID="cri-o://f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" gracePeriod=2 Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.651929 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734901 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734949 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.736895 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities" (OuterVolumeSpecName: "utilities") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.748001 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv" (OuterVolumeSpecName: "kube-api-access-7l5cv") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "kube-api-access-7l5cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.804381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828107 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" exitCode=0 Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828214 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"9ee331d8f9be369631f10654c158e87afe7a9d548a81fbe230376595ebd85ecc"} Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828234 4867 scope.go:117] "RemoveContainer" 
containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828718 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838139 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838167 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838176 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.862424 4867 scope.go:117] "RemoveContainer" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.875847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.892871 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.893209 4867 scope.go:117] "RemoveContainer" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.949836 4867 scope.go:117] "RemoveContainer" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc 
kubenswrapper[4867]: E0214 05:52:22.951020 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": container with ID starting with f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623 not found: ID does not exist" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.951176 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} err="failed to get container status \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": rpc error: code = NotFound desc = could not find container \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": container with ID starting with f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623 not found: ID does not exist" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.951293 4867 scope.go:117] "RemoveContainer" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: E0214 05:52:22.952063 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": container with ID starting with 217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f not found: ID does not exist" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952113 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} err="failed to get container status 
\"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": rpc error: code = NotFound desc = could not find container \"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": container with ID starting with 217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f not found: ID does not exist" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952151 4867 scope.go:117] "RemoveContainer" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: E0214 05:52:22.952634 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": container with ID starting with 4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6 not found: ID does not exist" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952706 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6"} err="failed to get container status \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": rpc error: code = NotFound desc = could not find container \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": container with ID starting with 4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6 not found: ID does not exist" Feb 14 05:52:23 crc kubenswrapper[4867]: I0214 05:52:23.018180 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" path="/var/lib/kubelet/pods/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925/volumes" Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.067425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-9mmhn"
Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.123429 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"]
Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.923833 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9mmhn" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server" containerID="cri-o://2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" gracePeriod=2
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.592729 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn"
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") "
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668657 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") "
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") "
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.669268 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities" (OuterVolumeSpecName: "utilities") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.669812 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.674189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm" (OuterVolumeSpecName: "kube-api-access-958nm") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "kube-api-access-958nm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.729972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.771963 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.772018 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939453 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" exitCode=0
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"}
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c"}
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939599 4867 scope.go:117] "RemoveContainer" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939915 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn"
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.978627 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"]
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.979406 4867 scope.go:117] "RemoveContainer" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"
Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.989962 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"]
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.018444 4867 scope.go:117] "RemoveContainer" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.062918 4867 scope.go:117] "RemoveContainer" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"
Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.063348 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": container with ID starting with 2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d not found: ID does not exist" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063391 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"} err="failed to get container status \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": rpc error: code = NotFound desc = could not find container \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": container with ID starting with 2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d not found: ID does not exist"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063417 4867 scope.go:117] "RemoveContainer" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"
Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.063851 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": container with ID starting with 8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1 not found: ID does not exist" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063880 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} err="failed to get container status \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": rpc error: code = NotFound desc = could not find container \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": container with ID starting with 8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1 not found: ID does not exist"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063900 4867 scope.go:117] "RemoveContainer" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"
Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.064186 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": container with ID starting with 3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d not found: ID does not exist" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"
Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.064221 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"} err="failed to get container status \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": rpc error: code = NotFound desc = could not find container \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": container with ID starting with 3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d not found: ID does not exist"
Feb 14 05:52:33 crc kubenswrapper[4867]: I0214 05:52:33.010162 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" path="/var/lib/kubelet/pods/03648482-256b-4fd0-94f3-f5dd889f5d49/volumes"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.429178 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"]
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.429992 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-utilities"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430005 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-utilities"
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430027 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-utilities"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430034 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-utilities"
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430059 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-content"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430101 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-content"
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430109 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430115 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430147 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-content"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-content"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430438 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430455 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.434674 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.441034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"]
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535183 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535547 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.636838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637013 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637097 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637434 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637629 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.661677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.754091 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7"
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.984617 4867 generic.go:334] "Generic (PLEG): container finished" podID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerID="fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f" exitCode=0
Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.984657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerDied","Data":"fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f"}
Feb 14 05:52:35 crc kubenswrapper[4867]: I0214 05:52:35.261335 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"]
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.001597 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" exitCode=0
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.001710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f"}
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.002078 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b"}
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.147270 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.185134 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"]
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.196130 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"]
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276229 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") "
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") "
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276944 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host" (OuterVolumeSpecName: "host") pod "b8b6ff93-1581-48eb-b74d-f7c97cdb1918" (UID: "b8b6ff93-1581-48eb-b74d-f7c97cdb1918"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.278181 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.285810 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn" (OuterVolumeSpecName: "kube-api-access-j95rn") pod "b8b6ff93-1581-48eb-b74d-f7c97cdb1918" (UID: "b8b6ff93-1581-48eb-b74d-f7c97cdb1918"). InnerVolumeSpecName "kube-api-access-j95rn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.380805 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.012480 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" path="/var/lib/kubelet/pods/b8b6ff93-1581-48eb-b74d-f7c97cdb1918/volumes"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.015883 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.015891 4867 scope.go:117] "RemoveContainer" containerID="fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.018697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"}
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.359083 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"]
Feb 14 05:52:37 crc kubenswrapper[4867]: E0214 05:52:37.360076 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.360109 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.360451 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.361747 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.363845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.505776 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.505932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.631156 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.695953 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:37 crc kubenswrapper[4867]: W0214 05:52:37.723548 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ac22fd_fd3d_4423_885f_165f2cfb3e40.slice/crio-83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a WatchSource:0}: Error finding container 83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a: Status 404 returned error can't find the container with id 83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a
Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.033175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerStarted","Data":"d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63"}
Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.033218 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerStarted","Data":"83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a"}
Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.038960 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" exitCode=0
Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.039047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"}
Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.061994 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" podStartSLOduration=1.061971412 podStartE2EDuration="1.061971412s" podCreationTimestamp="2026-02-14 05:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:52:38.045782227 +0000 UTC m=+6190.126719541" watchObservedRunningTime="2026-02-14 05:52:38.061971412 +0000 UTC m=+6190.142908736"
Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.050936 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"}
Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.053549 4867 generic.go:334] "Generic (PLEG): container finished" podID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerID="d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63" exitCode=0
Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.053582 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerDied","Data":"d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63"}
Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.087114 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4l6q7" podStartSLOduration=2.448085352 podStartE2EDuration="5.087095792s" podCreationTimestamp="2026-02-14 05:52:34 +0000 UTC" firstStartedPulling="2026-02-14 05:52:36.004097651 +0000 UTC m=+6188.085034965" lastFinishedPulling="2026-02-14 05:52:38.643108101 +0000 UTC m=+6190.724045405" observedRunningTime="2026-02-14 05:52:39.074196963 +0000 UTC m=+6191.155134277" watchObservedRunningTime="2026-02-14 05:52:39.087095792 +0000 UTC m=+6191.168033106"
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.195942 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.271383 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"]
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.286031 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"]
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.368744 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") "
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.368988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") "
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.369337 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host" (OuterVolumeSpecName: "host") pod "f6ac22fd-fd3d-4423-885f-165f2cfb3e40" (UID: "f6ac22fd-fd3d-4423-885f-165f2cfb3e40"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.369866 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.375856 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8" (OuterVolumeSpecName: "kube-api-access-58cd8") pod "f6ac22fd-fd3d-4423-885f-165f2cfb3e40" (UID: "f6ac22fd-fd3d-4423-885f-165f2cfb3e40"). InnerVolumeSpecName "kube-api-access-58cd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.472320 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.025601 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" path="/var/lib/kubelet/pods/f6ac22fd-fd3d-4423-885f-165f2cfb3e40/volumes"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.078311 4867 scope.go:117] "RemoveContainer" containerID="d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.078343 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.417396 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"]
Feb 14 05:52:41 crc kubenswrapper[4867]: E0214 05:52:41.418181 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.418194 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.418493 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.419375 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.424685 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.596807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.596986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.699744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.700167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.700492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.721180 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.736294 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:41 crc kubenswrapper[4867]: W0214 05:52:41.785022 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe60a3f_52b5_45a9_8603_17020367713d.slice/crio-9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f WatchSource:0}: Error finding container 9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f: Status 404 returned error can't find the container with id 9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f
Feb 14 05:52:42 crc kubenswrapper[4867]: I0214 05:52:42.092871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" event={"ID":"abe60a3f-52b5-45a9-8603-17020367713d","Type":"ContainerStarted","Data":"9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f"}
Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.107600 4867 generic.go:334] "Generic (PLEG): container finished" podID="abe60a3f-52b5-45a9-8603-17020367713d" containerID="6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc" exitCode=0
Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.107831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" event={"ID":"abe60a3f-52b5-45a9-8603-17020367713d","Type":"ContainerDied","Data":"6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc"}
Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.163770 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"]
Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.174832 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"]
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.278800 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp"
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361152 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"abe60a3f-52b5-45a9-8603-17020367713d\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") "
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361213 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"abe60a3f-52b5-45a9-8603-17020367713d\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") "
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host" (OuterVolumeSpecName: "host") pod "abe60a3f-52b5-45a9-8603-17020367713d" (UID: "abe60a3f-52b5-45a9-8603-17020367713d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.362449 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") on node \"crc\" DevicePath \"\""
Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.369663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd" (OuterVolumeSpecName: "kube-api-access-p2dgd") pod "abe60a3f-52b5-45a9-8603-17020367713d" (UID: "abe60a3f-52b5-45a9-8603-17020367713d"). InnerVolumeSpecName "kube-api-access-p2dgd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.464885 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.755596 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.755869 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.809300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.010079 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe60a3f-52b5-45a9-8603-17020367713d" path="/var/lib/kubelet/pods/abe60a3f-52b5-45a9-8603-17020367713d/volumes" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.128824 4867 scope.go:117] "RemoveContainer" containerID="6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.128838 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.194125 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.252201 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.157388 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4l6q7" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" containerID="cri-o://7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" gracePeriod=2 Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.727004 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842521 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842678 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod 
\"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.844018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities" (OuterVolumeSpecName: "utilities") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.852462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr" (OuterVolumeSpecName: "kube-api-access-wtkqr") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "kube-api-access-wtkqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.871525 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946057 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946098 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946109 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174370 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" exitCode=0 Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"} Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b"} Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174494 4867 scope.go:117] "RemoveContainer" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 
05:52:48.174710 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.203552 4867 scope.go:117] "RemoveContainer" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.251400 4867 scope.go:117] "RemoveContainer" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.290476 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.306098 4867 scope.go:117] "RemoveContainer" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.316495 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": container with ID starting with 7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767 not found: ID does not exist" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.316586 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"} err="failed to get container status \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": rpc error: code = NotFound desc = could not find container \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": container with ID starting with 7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767 not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.316616 4867 scope.go:117] 
"RemoveContainer" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.317638 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": container with ID starting with 34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24 not found: ID does not exist" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.317706 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"} err="failed to get container status \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": rpc error: code = NotFound desc = could not find container \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": container with ID starting with 34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24 not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.317727 4867 scope.go:117] "RemoveContainer" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.318137 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": container with ID starting with 42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f not found: ID does not exist" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.318224 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f"} err="failed to get container status \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": rpc error: code = NotFound desc = could not find container \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": container with ID starting with 42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.324669 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.353628 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice/crio-6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice\": RecentStats: unable to find data in memory cache]" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.353964 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice/crio-6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice\": RecentStats: unable to find data in memory cache]" Feb 14 05:52:49 crc kubenswrapper[4867]: I0214 05:52:49.011710 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" 
path="/var/lib/kubelet/pods/dd017092-381d-4839-bd5f-b8177c576ab1/volumes" Feb 14 05:53:01 crc kubenswrapper[4867]: I0214 05:53:01.251078 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:53:01 crc kubenswrapper[4867]: I0214 05:53:01.252031 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:53:10 crc kubenswrapper[4867]: I0214 05:53:10.912379 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-api/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.116960 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-listener/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.149261 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-evaluator/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.226058 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-notifier/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.354876 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-584d8cfdf8-4lt8c_3375fa12-2e3a-431e-9341-72d5a213083e/barbican-api/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.357373 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-584d8cfdf8-4lt8c_3375fa12-2e3a-431e-9341-72d5a213083e/barbican-api-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.528612 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f6876db8-kxmgv_4a4a3883-6484-4af9-a7f0-8dd5ee4da247/barbican-keystone-listener/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.607736 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f6876db8-kxmgv_4a4a3883-6484-4af9-a7f0-8dd5ee4da247/barbican-keystone-listener-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.700576 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6cb8d59db5-hc7rx_6517b483-cb9c-465e-a7f0-f697b6ba3189/barbican-worker/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.785181 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6cb8d59db5-hc7rx_6517b483-cb9c-465e-a7f0-f697b6ba3189/barbican-worker-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.884808 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9_e3d43ea0-54e7-4fd1-892d-bbc3d01a5321/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.037695 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-central-agent/1.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.142918 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-central-agent/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.168796 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-notification-agent/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.251781 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/proxy-httpd/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.261343 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/sg-core/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.455224 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_195db0d6-0991-48b6-a7a1-ad5311555ede/cinder-api/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.482721 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_195db0d6-0991-48b6-a7a1-ad5311555ede/cinder-api-log/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.643735 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/cinder-scheduler/1.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.673569 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/cinder-scheduler/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.738287 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/probe/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.832643 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf_a716bc3f-98b5-4c50-af5f-46de007bd255/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.961696 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-78rwr_e04d43db-dfbf-41c6-8b73-48ff87baa800/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.098690 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/init/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.288174 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/init/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.368259 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/dnsmasq-dns/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.369218 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs_879dee23-804e-4b8a-ac20-0546383202b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.624989 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5e42dca-0c7d-485a-95bc-b26db4e12369/glance-httpd/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.664310 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5e42dca-0c7d-485a-95bc-b26db4e12369/glance-log/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.874637 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b66304c6-61a4-4b8b-b77b-dd816c0a0890/glance-log/0.log" Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.895058 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_b66304c6-61a4-4b8b-b77b-dd816c0a0890/glance-httpd/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.739869 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9_01cb12dd-9d34-4898-941a-05635d21630f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.755582 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7b479dbc77-k8ts7_fcce6a26-826f-4268-9007-2e3c4411450f/heat-engine/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.921065 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-64c645895b-sclxg_7996e855-fbe0-4324-a337-8841df83e714/heat-api/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.961335 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-57b4cc7645-246cl_24d4f5bc-b41b-4f17-977e-d36995a99521/heat-cfnapi/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.983640 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c22xw_0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.225447 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29517421-jh7t8_dabbee2b-0869-439e-8c9c-f417ab44f850/keystone-cron/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.511269 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_89e70483-d3e8-4758-bb61-ae6147dd4f39/kube-state-metrics/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.556075 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p_8ec3156c-bcce-4dee-8ce5-7773409e880e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.803463 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7595b47f77-vtg9d_1ddcc862-a10c-487c-aaa4-0e93df9c0005/keystone-api/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.893643 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-jgnc5_6e133b22-e3ca-4be2-8e71-56b6ca79dab2/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.065985 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e9139dc7-b868-4f7c-9e7e-10e313ff1e10/mysqld-exporter/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.410840 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7886d5654f-wzr2s_d4a16bfe-366a-4143-932a-e0b51615c401/neutron-api/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.444729 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m_d07bc498-5b6c-465a-bda2-df814e9c19c8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.477080 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7886d5654f-wzr2s_d4a16bfe-366a-4143-932a-e0b51615c401/neutron-httpd/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.153368 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fdfa169f-f57f-4d9c-bef3-529878be941b/nova-cell0-conductor-conductor/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.398137 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_464bbcc9-1810-40bc-8773-bfa3e615b67b/nova-api-log/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.414209 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e367f188-2aa4-4374-a768-92b8e463e40d/nova-cell1-conductor-conductor/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.749091 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3e1bf5e4-7b04-4a47-aa41-e547815fc623/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.771195 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_464bbcc9-1810-40bc-8773-bfa3e615b67b/nova-api-api/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.792960 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s5lc4_8c3553e4-9d3b-4c1d-bbc3-35371d733c86/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.124537 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3748198f-49fe-4a76-bd81-4ad518a594e8/nova-metadata-log/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.596698 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/mysql-bootstrap/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.639967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7bb228b6-c3a9-46ac-8c21-a8786c6ac11b/nova-scheduler-scheduler/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.855891 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/mysql-bootstrap/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.943361 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/galera/1.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.947361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/galera/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.237878 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/mysql-bootstrap/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.464194 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/mysql-bootstrap/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.532127 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/galera/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.603246 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/galera/1.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.908100 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6fdee887-8ecb-4c1e-8a88-0284fc050f0e/openstackclient/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.184564 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7lpqj_16c28c0f-9310-4721-87cf-2d1bb88b5bba/ovn-controller/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.317479 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4gz6p_43e8f5ec-ba3d-4962-97f1-2be3a087852e/openstack-network-exporter/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.529744 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server-init/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.559314 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3748198f-49fe-4a76-bd81-4ad518a594e8/nova-metadata-metadata/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.696147 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovs-vswitchd/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.756900 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server-init/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.760566 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.968125 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vjz5q_c3ef84d6-150a-46b1-8e93-7e650c8be1ef/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.027922 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0552eb77-2bc5-49dd-911e-f08071a83da9/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.104436 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0552eb77-2bc5-49dd-911e-f08071a83da9/ovn-northd/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.274966 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_353b0cad-bb6a-4a68-b787-64fb7b32ee27/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.296090 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_353b0cad-bb6a-4a68-b787-64fb7b32ee27/ovsdbserver-nb/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.513665 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9faf0052-6200-4ac5-9216-7a26a29f4508/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.549361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9faf0052-6200-4ac5-9216-7a26a29f4508/ovsdbserver-sb/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.822014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8574cd8bdd-r5cv6_2ef45c32-32a1-4302-84e3-3ff7e864cb99/placement-api/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.871330 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8574cd8bdd-r5cv6_2ef45c32-32a1-4302-84e3-3ff7e864cb99/placement-log/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.899021 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/init-config-reloader/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.095789 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/init-config-reloader/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.151717 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/thanos-sidecar/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.159446 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/prometheus/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.204079 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/config-reloader/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.414315 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/setup-container/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.671473 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/setup-container/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.760243 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/rabbitmq/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.797821 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.203431 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.212835 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/rabbitmq/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.299654 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.591898 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/rabbitmq/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.627264 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.693154 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.852560 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.923018 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/rabbitmq/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.010681 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml_4a0a98e3-261b-460d-92c2-4fce312f5171/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.243911 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-drcl6_0c240366-e845-4987-943c-afc965ddc2f4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.361948 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf_51f6e45c-a545-4b49-b6f8-a3048619f24d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.528007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lsj48_764366f2-ea14-4cc9-a195-52ee347e666d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.668563 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5rl49_e72df4ca-d603-4f2e-9ff1-3ec392ef11b7/ssh-known-hosts-edpm-deployment/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.873881 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5559ff585f-sb7wb_76fdab94-9bfb-48b7-82f9-bdd6d2258cdb/proxy-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.052567 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dc8sm_92f44db3-78d7-4707-af34-daf9f3bbc0bf/swift-ring-rebalance/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.056804 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5559ff585f-sb7wb_76fdab94-9bfb-48b7-82f9-bdd6d2258cdb/proxy-httpd/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.295342 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.327233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-reaper/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.335709 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-replicator/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.492066 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.514108 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.667014 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.690235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-replicator/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.790897 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-updater/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.830456 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.950329 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-expirer/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.027013 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-replicator/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.064053 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-server/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.138374 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-updater/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.259121 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/rsync/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.327106 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/swift-recon-cron/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.508214 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq_b70721c5-f29f-4cc4-8ee7-88341a81765d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.738364 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps_43f6ac0f-9203-4827-bd57-acbae7793028/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.966566 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be58ab35-1c46-426e-87a1-9010a643ead5/test-operator-logs-container/0.log" Feb 14 05:53:27 crc kubenswrapper[4867]: I0214 05:53:27.134663 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns_6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.035443 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a161c594-8af3-458f-911a-bbf51e7bfcdd/tempest-tests-tempest-tests-runner/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.234351 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f1d6dceb-5ee5-407d-ade4-be35d128d8dc/memcached/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.473945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474437 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474453 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474489 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-content" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474512 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-content" Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474531 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-utilities" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474537 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-utilities" Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474571 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474577 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474799 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474815 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.477965 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.492269 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.600302 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.600869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.601130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.703827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.704009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.704110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.705529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.705680 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.728060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.856220 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:29 crc kubenswrapper[4867]: I0214 05:53:29.461338 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:29 crc kubenswrapper[4867]: I0214 05:53:29.646789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"bdbd7570f641df51015aac2cfdcc57ae989a722bc97af32d68059ca55601be89"} Feb 14 05:53:30 crc kubenswrapper[4867]: I0214 05:53:30.658325 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" exitCode=0 Feb 14 05:53:30 crc kubenswrapper[4867]: I0214 05:53:30.658844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865"} Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.251271 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.251573 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.670407 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} Feb 14 05:53:38 crc kubenswrapper[4867]: I0214 05:53:38.752094 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" exitCode=0 Feb 14 05:53:38 crc kubenswrapper[4867]: I0214 05:53:38.752744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} Feb 14 05:53:39 crc kubenswrapper[4867]: I0214 05:53:39.766999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} Feb 14 05:53:39 crc kubenswrapper[4867]: I0214 05:53:39.787453 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsjxv" podStartSLOduration=3.305384714 podStartE2EDuration="11.787439002s" podCreationTimestamp="2026-02-14 05:53:28 +0000 UTC" firstStartedPulling="2026-02-14 05:53:30.660693637 +0000 UTC m=+6242.741630951" lastFinishedPulling="2026-02-14 05:53:39.142747925 +0000 UTC m=+6251.223685239" observedRunningTime="2026-02-14 05:53:39.785575643 +0000 UTC m=+6251.866512957" watchObservedRunningTime="2026-02-14 05:53:39.787439002 +0000 UTC m=+6251.868376316" Feb 14 05:53:48 crc kubenswrapper[4867]: I0214 05:53:48.856691 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:48 crc kubenswrapper[4867]: I0214 05:53:48.857380 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:49 crc kubenswrapper[4867]: I0214 05:53:49.914465 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:53:49 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:53:49 crc kubenswrapper[4867]: > Feb 14 05:53:58 crc kubenswrapper[4867]: I0214 05:53:58.745335 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:58 crc kubenswrapper[4867]: I0214 05:53:58.972235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.004819 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.008867 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.143496 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.198995 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.248420 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/extract/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.762187 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ndb8l_652d3b74-0634-4f8f-b5ef-3adfc53920eb/manager/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.925435 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:53:59 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:53:59 crc kubenswrapper[4867]: > Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.204466 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-tpfxn_1f889f7b-8ae5-43e3-ab54-d3bf06c010df/manager/0.log" Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.425572 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-jxpv2_185d4fd5-608b-48d8-8731-27e7a05adfe2/manager/0.log" Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.762931 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-bgznq_4b75df5b-04e5-445f-8d2d-57c6cbe5971c/manager/0.log" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.250657 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.250976 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.251021 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.251984 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.252042 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e" gracePeriod=600 Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.558286 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-6nhjp_94ff35ef-77e1-4085-ad2f-837ebc666b2a/manager/1.log" Feb 14 05:54:01 crc kubenswrapper[4867]: E0214 05:54:01.626798 4867 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5992e46c_bce7_4b9f_82f2_c7ffb93286cd.slice/crio-969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.810976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-6nhjp_94ff35ef-77e1-4085-ad2f-837ebc666b2a/manager/0.log" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.993410 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-jqq2w_ebee5651-7233-4c18-bb97-a4dc91eabef4/manager/0.log" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034162 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e" exitCode=0 Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034203 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"} Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"} Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034276 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.483554 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-x7qx5_dc65ca0c-1d72-468f-b600-dfb8332bf4bd/manager/0.log" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.792785 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-8dzwp_6b5078d9-f30f-40a8-b5b5-8eb11271ec10/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.126265 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-chbgl_3025ff58-4a91-43f5-8f15-94cadd0cef8b/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.479893 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-wwm9m_7bb6de63-3c92-43de-a01b-b34df765aeba/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.553219 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-2xwdd_38a9cdf3-42e2-4279-8092-af7e8c82bc51/manager/0.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.165712 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-tf6rg_74a43e5b-11c4-459d-bbc7-03aa03489f17/manager/0.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.423422 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t_634f9e2f-2100-49e3-a31f-a369cf8ff13f/manager/1.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.487019 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t_634f9e2f-2100-49e3-a31f-a369cf8ff13f/manager/0.log" Feb 14 05:54:05 crc 
kubenswrapper[4867]: I0214 05:54:05.126892 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6b9546c8f4-49lm8_10461723-ecff-48fe-a034-9a07bf3bf8f7/operator/0.log" Feb 14 05:54:05 crc kubenswrapper[4867]: I0214 05:54:05.549401 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-29mb7_b4bb205c-0469-49a0-b783-9b51ae11ddfe/registry-server/1.log" Feb 14 05:54:05 crc kubenswrapper[4867]: I0214 05:54:05.778010 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7zkqz_64ff8480-2ca0-40d5-b5c9-448d0db3c575/manager/1.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.030649 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-29mb7_b4bb205c-0469-49a0-b783-9b51ae11ddfe/registry-server/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.394615 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-dszdp_ffb00aaf-6760-440e-827a-f795baf3693a/manager/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.754822 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-vwvtz_9ec66be5-3947-45d1-bf34-c7639e8d4c8a/manager/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.990498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-87pdl_c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d/operator/1.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.072831 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-87pdl_c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d/operator/0.log" Feb 14 05:54:07 crc 
kubenswrapper[4867]: I0214 05:54:07.419403 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-snrw6_bc4bb4fd-bcc8-438b-af84-a2db3d3e346a/manager/0.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.895067 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-t7hwz_67e3f2b9-2dbf-4c35-b1cd-02be51f58e38/manager/0.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.904393 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7zkqz_64ff8480-2ca0-40d5-b5c9-448d0db3c575/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.192387 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-6d9jj_82e5dbee-ab1e-498c-9460-be75226afa18/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.304607 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75585db5cc-kzk25_c83fa345-043f-453c-b797-a00db3111d44/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.360034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-55dcdcc8d-49t56_d72a97fb-2a6a-4af1-8f0c-de88ab679119/manager/0.log" Feb 14 05:54:09 crc kubenswrapper[4867]: I0214 05:54:09.921084 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:54:09 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:54:09 crc kubenswrapper[4867]: > Feb 14 05:54:14 crc kubenswrapper[4867]: I0214 05:54:14.460994 4867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-pxm8d_66c8a0dd-f076-4994-bd42-39c80de83233/manager/0.log" Feb 14 05:54:19 crc kubenswrapper[4867]: I0214 05:54:19.915290 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:54:19 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:54:19 crc kubenswrapper[4867]: > Feb 14 05:54:28 crc kubenswrapper[4867]: I0214 05:54:28.909184 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:28 crc kubenswrapper[4867]: I0214 05:54:28.965438 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:29 crc kubenswrapper[4867]: I0214 05:54:29.694172 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.389009 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" containerID="cri-o://7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" gracePeriod=2 Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.540312 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f47sx_89db71f1-1a8b-4c57-9a3d-eb725060aee9/control-plane-machine-set-operator/0.log" Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.824633 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-699tj_8437deca-adf5-4648-9abe-2c1c6376d07b/machine-api-operator/0.log" Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.860451 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-699tj_8437deca-adf5-4648-9abe-2c1c6376d07b/kube-rbac-proxy/0.log" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.286143 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.314875 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities" (OuterVolumeSpecName: "utilities") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315983 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.325878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j" (OuterVolumeSpecName: "kube-api-access-4lx6j") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). InnerVolumeSpecName "kube-api-access-4lx6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403445 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" exitCode=0 Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403550 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"bdbd7570f641df51015aac2cfdcc57ae989a722bc97af32d68059ca55601be89"} Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403677 4867 scope.go:117] "RemoveContainer" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.416998 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.430909 4867 scope.go:117] "RemoveContainer" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.452829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.468092 4867 scope.go:117] "RemoveContainer" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.524981 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547133 4867 scope.go:117] "RemoveContainer" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.547709 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": container with ID starting with 7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2 not found: ID does not exist" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547744 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} err="failed to get container status \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": rpc error: code = NotFound desc = could not find container \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": container with ID starting with 7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2 not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547765 4867 scope.go:117] "RemoveContainer" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.548034 4867 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": container with ID starting with 7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f not found: ID does not exist" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548072 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} err="failed to get container status \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": rpc error: code = NotFound desc = could not find container \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": container with ID starting with 7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548091 4867 scope.go:117] "RemoveContainer" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.548379 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": container with ID starting with 30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865 not found: ID does not exist" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548452 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865"} err="failed to get container status \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": rpc error: code = NotFound desc = could 
not find container \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": container with ID starting with 30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865 not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.750837 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.765675 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:33 crc kubenswrapper[4867]: I0214 05:54:33.010125 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" path="/var/lib/kubelet/pods/d55eb762-847d-4073-b20e-d1f306d0a424/volumes" Feb 14 05:54:43 crc kubenswrapper[4867]: I0214 05:54:43.814952 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gslqt_1f305679-0f4d-440e-a053-7b3627eaae9c/cert-manager-controller/0.log" Feb 14 05:54:44 crc kubenswrapper[4867]: I0214 05:54:44.023678 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-s4258_2224c85e-13be-400d-abf8-6b412d8c55ee/cert-manager-cainjector/0.log" Feb 14 05:54:44 crc kubenswrapper[4867]: I0214 05:54:44.062904 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xlg4t_34f53dfe-4707-4a5c-8745-c4ed944c6a6a/cert-manager-webhook/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.593057 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-xwq77_bd1547ee-0518-45af-bb63-9001da6fa7de/nmstate-console-plugin/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.796861 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-k6p82_ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa/nmstate-handler/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.859325 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-57gj6_c9fcfe59-df8c-4433-a47f-8b07f90d98bc/kube-rbac-proxy/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.931348 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-57gj6_c9fcfe59-df8c-4433-a47f-8b07f90d98bc/nmstate-metrics/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.992897 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-tjfgz_914b3f92-c030-4d1e-8454-96a7220f851e/nmstate-operator/0.log" Feb 14 05:54:59 crc kubenswrapper[4867]: I0214 05:54:59.195624 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-khbvf_fdb6e297-9da3-41ff-a6f3-de81833178c8/nmstate-webhook/0.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.263042 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/1.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.308092 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/kube-rbac-proxy/0.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.445539 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.244819 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_5ecc414b-6bac-4b24-99c5-e2d1fb67f314/prometheus-operator-admission-webhook/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.247432 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06/prometheus-operator-admission-webhook/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.269925 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwlcr_987816d4-f9a4-47da-983c-317f9a3f4d86/prometheus-operator/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.771383 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kv4j7_94f47db9-4437-4b3e-aee5-f6f65e715e62/operator/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.868445 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-492b9_701367b7-aef6-43b5-a0f9-3a91206962de/observability-ui-dashboards/0.log" Feb 14 05:55:31 crc kubenswrapper[4867]: I0214 05:55:31.002775 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7qfh9_31f03187-50f6-4015-afdc-422455a63006/perses-operator/0.log" Feb 14 05:55:46 crc kubenswrapper[4867]: I0214 05:55:46.999157 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-pmdnk_89b20edb-1b24-48e1-accf-f0a2b65c8da1/cluster-logging-operator/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.241235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-4tm7t_0b309a8c-060a-4e8b-9731-3c4c3aab56f7/collector/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.254676 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_6975f95f-884b-4952-8bf8-0d18537e3403/loki-compactor/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.472217 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-7zdqp_c9201352-8585-47d4-9c13-b9e21ac4cd9f/loki-distributor/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.506101 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-l82l4_0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5/gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.605947 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-l82l4_0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5/opa/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.695406 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-md7ts_d28844dc-6974-446b-bd9a-b22586858387/opa/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.695772 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-md7ts_d28844dc-6974-446b-bd9a-b22586858387/gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.861899 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3c3333e0-ec4e-41bf-8296-9469ad3ac9cd/loki-index-gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.982878 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_775ca902-fd03-4191-9440-ea598768d4e6/loki-ingester/0.log" Feb 14 05:55:48 crc kubenswrapper[4867]: I0214 05:55:48.106182 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-5td7f_9c48c070-b4b3-48af-b40a-d82788f764d9/loki-querier/0.log" Feb 14 05:55:48 crc kubenswrapper[4867]: I0214 05:55:48.224027 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-cfcbp_837b4fe4-f827-4882-8af7-225b18bb3e22/loki-query-frontend/0.log" Feb 14 05:56:01 crc kubenswrapper[4867]: I0214 05:56:01.250975 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:56:01 crc kubenswrapper[4867]: I0214 05:56:01.251914 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:56:04 crc kubenswrapper[4867]: I0214 05:56:04.775094 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-zhmxc_516cf204-1263-431e-a450-039739b0d925/kube-rbac-proxy/0.log" Feb 14 05:56:04 crc kubenswrapper[4867]: I0214 05:56:04.779545 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-zhmxc_516cf204-1263-431e-a450-039739b0d925/controller/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.021658 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.534412 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.549967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.571447 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.582367 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.831828 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.841591 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.859261 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.873068 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.064383 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.081317 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.140135 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.160227 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/controller/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.402230 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/kube-rbac-proxy/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.472802 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.629880 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr/1.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.710368 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/kube-rbac-proxy-frr/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.735934 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/reloader/0.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.003131 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-9gqfb_85e0628d-4132-4c09-9da0-35db43024c9c/frr-k8s-webhook-server/0.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.111328 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-9gqfb_85e0628d-4132-4c09-9da0-35db43024c9c/frr-k8s-webhook-server/1.log"
Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.410869 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67594686f4-52kwb_e1d5f0bd-4e8c-45c7-9d4e-c530689948ad/manager/1.log"
Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.529034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67594686f4-52kwb_e1d5f0bd-4e8c-45c7-9d4e-c530689948ad/manager/0.log"
Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.732334 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bfb45cb-mpxbn_d5e9c930-96ca-4a35-af4f-b8ae033469a5/webhook-server/1.log"
Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.786767 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bfb45cb-mpxbn_d5e9c930-96ca-4a35-af4f-b8ae033469a5/webhook-server/0.log"
Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.005678 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/kube-rbac-proxy/0.log"
Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.302391 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr/0.log"
Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.419799 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/speaker/1.log"
Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.780924 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/speaker/0.log"
Feb 14 05:56:21 crc kubenswrapper[4867]: I0214 05:56:21.902123 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.100469 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.108913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.160421 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.346991 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.370982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.385460 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/extract/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.536093 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.743980 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.770737 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.792582 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.965986 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.980086 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log"
Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.980330 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/extract/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.195604 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.328745 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.329803 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.378077 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.556255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/extract/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.577122 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.608876 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.768798 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.935737 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.983854 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log"
Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.995119 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.132014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.135111 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.481932 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.547948 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/registry-server/1.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.742697 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.771434 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.798316 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.929276 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/registry-server/0.log"
Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.966007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.025830 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.291096 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.511131 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.548076 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.563373 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.856982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.890885 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log"
Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.921128 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/extract/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.052228 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/registry-server/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.147224 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.300848 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.322290 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.334852 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.509955 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.510498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/extract/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.528087 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.557728 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-p82xp_33b576d8-f768-4fd2-895d-7d4ababe8714/marketplace-operator/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.703595 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.921106 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.927170 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log"
Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.935832 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.126527 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.130172 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.204087 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.362233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/registry-server/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.388079 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.390711 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.400206 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.562477 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log"
Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.567534 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log"
Feb 14 05:56:28 crc kubenswrapper[4867]: I0214 05:56:28.599618 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/registry-server/0.log"
Feb 14 05:56:31 crc kubenswrapper[4867]: I0214 05:56:31.250858 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:56:31 crc kubenswrapper[4867]: I0214 05:56:31.251416 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.485673 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_5ecc414b-6bac-4b24-99c5-e2d1fb67f314/prometheus-operator-admission-webhook/0.log"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.486967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwlcr_987816d4-f9a4-47da-983c-317f9a3f4d86/prometheus-operator/0.log"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.517060 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06/prometheus-operator-admission-webhook/0.log"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.615690 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kv4j7_94f47db9-4437-4b3e-aee5-f6f65e715e62/operator/0.log"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.675834 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7qfh9_31f03187-50f6-4015-afdc-422455a63006/perses-operator/0.log"
Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.701350 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-492b9_701367b7-aef6-43b5-a0f9-3a91206962de/observability-ui-dashboards/0.log"
Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.793380 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/kube-rbac-proxy/0.log"
Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.824142 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/0.log"
Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.859245 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/1.log"
Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251017 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251584 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.252570 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.252622 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" gracePeriod=600
Feb 14 05:57:01 crc kubenswrapper[4867]: E0214 05:57:01.379386 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300370 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" exitCode=0
Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"}
Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300848 4867 scope.go:117] "RemoveContainer" containerID="969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"
Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.301782 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:57:02 crc kubenswrapper[4867]: E0214 05:57:02.302225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:57:16 crc kubenswrapper[4867]: I0214 05:57:16.997994 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:57:17 crc kubenswrapper[4867]: E0214 05:57:17.008044 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:57:29 crc kubenswrapper[4867]: I0214 05:57:29.997784 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:57:30 crc kubenswrapper[4867]: E0214 05:57:29.999100 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:57:42 crc kubenswrapper[4867]: I0214 05:57:42.998686 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:57:43 crc kubenswrapper[4867]: E0214 05:57:42.999825 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:57:57 crc kubenswrapper[4867]: I0214 05:57:56.998282 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:57:57 crc kubenswrapper[4867]: E0214 05:57:56.999200 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:58:11 crc kubenswrapper[4867]: I0214 05:58:11.997735 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:58:11 crc kubenswrapper[4867]: E0214 05:58:11.999595 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:58:22 crc kubenswrapper[4867]: I0214 05:58:22.997758 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:58:22 crc kubenswrapper[4867]: E0214 05:58:22.998905 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:58:37 crc kubenswrapper[4867]: I0214 05:58:37.997849 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:58:37 crc kubenswrapper[4867]: E0214 05:58:37.998725 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:58:51 crc kubenswrapper[4867]: I0214 05:58:51.996919 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:58:51 crc kubenswrapper[4867]: E0214 05:58:51.997721 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:59:02 crc kubenswrapper[4867]: I0214 05:59:02.997396 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:59:02 crc kubenswrapper[4867]: E0214 05:59:02.999520 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.919723 4867 generic.go:334] "Generic (PLEG): container finished" podID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c" exitCode=0
Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.919885 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerDied","Data":"177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"}
Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.920991 4867 scope.go:117] "RemoveContainer" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"
Feb 14 05:59:05 crc kubenswrapper[4867]: I0214 05:59:05.046933 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/gather/0.log"
Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.030322 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"]
Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.031538 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rtzc7/must-gather-wmzns" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" containerID="cri-o://8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76" gracePeriod=2
Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.051153 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"]
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.068935 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/copy/0.log"
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.070021 4867 generic.go:334] "Generic (PLEG): container finished" podID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerID="8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76" exitCode=143
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.070112 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6a5a00012c52a2aac1e8dffdc748b022caf87a8674b148896c8bda016c8acb"
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.079547 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/copy/0.log"
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.079902 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.189124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"89d6412f-a37d-4f30-8c3a-9514185847fc\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") "
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.189813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"89d6412f-a37d-4f30-8c3a-9514185847fc\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") "
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.234709 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv" (OuterVolumeSpecName: "kube-api-access-slmvv") pod "89d6412f-a37d-4f30-8c3a-9514185847fc" (UID: "89d6412f-a37d-4f30-8c3a-9514185847fc"). InnerVolumeSpecName "kube-api-access-slmvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.294224 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") on node \"crc\" DevicePath \"\""
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.532356 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "89d6412f-a37d-4f30-8c3a-9514185847fc" (UID: "89d6412f-a37d-4f30-8c3a-9514185847fc"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.602569 4867 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.011786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" path="/var/lib/kubelet/pods/89d6412f-a37d-4f30-8c3a-9514185847fc/volumes"
Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.080556 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns"
Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.998226 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:59:15 crc kubenswrapper[4867]: E0214 05:59:15.998760 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:59:19 crc kubenswrapper[4867]: I0214 05:59:19.708245 4867 scope.go:117] "RemoveContainer" containerID="8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76"
Feb 14 05:59:19 crc kubenswrapper[4867]: I0214 05:59:19.761687 4867 scope.go:117] "RemoveContainer" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"
Feb 14 05:59:29 crc kubenswrapper[4867]: I0214 05:59:29.011929 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:59:29 crc kubenswrapper[4867]: E0214 05:59:29.037255 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:59:41 crc kubenswrapper[4867]: I0214 05:59:41.997457 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 05:59:41 crc kubenswrapper[4867]: 
E0214 05:59:41.998458 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:53 crc kubenswrapper[4867]: I0214 05:59:53.998189 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:54 crc kubenswrapper[4867]: E0214 05:59:54.000467 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.275729 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278766 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278811 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278866 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278876 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278918 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278928 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278950 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-content" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278959 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-content" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.279003 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-utilities" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279014 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-utilities" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279331 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279366 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279386 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.280534 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.302241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.308404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.309843 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.384529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.385321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.385439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.490268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.513453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.519264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.619296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:01 crc kubenswrapper[4867]: I0214 06:00:01.900145 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.767099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerStarted","Data":"eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d"} Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.767537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerStarted","Data":"5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561"} Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.789435 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" 
podStartSLOduration=2.789411786 podStartE2EDuration="2.789411786s" podCreationTimestamp="2026-02-14 06:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 06:00:02.784117577 +0000 UTC m=+6634.865054901" watchObservedRunningTime="2026-02-14 06:00:02.789411786 +0000 UTC m=+6634.870349100" Feb 14 06:00:04 crc kubenswrapper[4867]: I0214 06:00:04.787487 4867 generic.go:334] "Generic (PLEG): container finished" podID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerID="eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d" exitCode=0 Feb 14 06:00:04 crc kubenswrapper[4867]: I0214 06:00:04.787555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerDied","Data":"eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d"} Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.218052 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.364982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.365442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.365744 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.366652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume" (OuterVolumeSpecName: "config-volume") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.367011 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.372414 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh" (OuterVolumeSpecName: "kube-api-access-7ggmh") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "kube-api-access-7ggmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.373325 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.469721 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.469974 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerDied","Data":"5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561"} Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810950 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810800 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.885236 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.896590 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.998253 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:06 crc kubenswrapper[4867]: E0214 06:00:06.998561 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:07 crc kubenswrapper[4867]: I0214 06:00:07.010919 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" path="/var/lib/kubelet/pods/4d32d646-2d3a-40db-acb7-a2c9e410c655/volumes" Feb 14 06:00:19 crc kubenswrapper[4867]: I0214 06:00:19.893639 4867 scope.go:117] "RemoveContainer" containerID="57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6" Feb 14 06:00:20 crc kubenswrapper[4867]: I0214 06:00:20.997908 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:20 crc kubenswrapper[4867]: E0214 06:00:20.998419 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:34 crc kubenswrapper[4867]: I0214 06:00:34.998851 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:35 crc kubenswrapper[4867]: E0214 06:00:34.999918 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:44 crc kubenswrapper[4867]: I0214 06:00:44.762542 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:45 crc kubenswrapper[4867]: I0214 06:00:45.997401 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:45 crc kubenswrapper[4867]: E0214 06:00:45.998114 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:49 crc kubenswrapper[4867]: I0214 06:00:49.762071 4867 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.764431 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.765183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.764714 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.769306 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.769545 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" containerID="cri-o://b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84" gracePeriod=30 Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.198403 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"] Feb 14 06:01:00 crc kubenswrapper[4867]: E0214 06:01:00.200323 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.200370 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.200666 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.201542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.225971 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"] Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.321482 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.322834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.323106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: 
\"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.323825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " 
pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.442604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.443087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.449567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.454255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.554633 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl"
Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.003656 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:01:01 crc kubenswrapper[4867]: E0214 06:01:01.004729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.296342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"]
Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.462338 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerStarted","Data":"b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2"}
Feb 14 06:01:14 crc kubenswrapper[4867]: I0214 06:01:14.997912 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:01:15 crc kubenswrapper[4867]: E0214 06:01:14.999411 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:01:19 crc kubenswrapper[4867]: I0214 06:01:19.750216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerStarted","Data":"af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4"}
Feb 14 06:01:19 crc kubenswrapper[4867]: I0214 06:01:19.777873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29517481-xvtzl" podStartSLOduration=19.777854687 podStartE2EDuration="19.777854687s" podCreationTimestamp="2026-02-14 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 06:01:19.774690744 +0000 UTC m=+6711.855628058" watchObservedRunningTime="2026-02-14 06:01:19.777854687 +0000 UTC m=+6711.858792001"
Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.496291 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.781729 4867 generic.go:334] "Generic (PLEG): container finished" podID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerID="b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84" exitCode=0
Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.781777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerDied","Data":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"}
Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.781817 4867 scope.go:117] "RemoveContainer" containerID="86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"
Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.796819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"71714cb23ecd923ca245480a524041bb02e6c9e3073f3c792d1b4ec0a66caae9"}
Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.799110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerDied","Data":"af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4"}
Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.799003 4867 generic.go:334] "Generic (PLEG): container finished" podID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerID="af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4" exitCode=0
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.254678 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl"
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.419807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") "
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") "
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") "
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") "
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.443248 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn" (OuterVolumeSpecName: "kube-api-access-s5lgn") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "kube-api-access-s5lgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.444456 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.479326 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.507352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data" (OuterVolumeSpecName: "config-data") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523538 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523586 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523596 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523606 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") on node \"crc\" DevicePath \"\""
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerDied","Data":"b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2"}
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825694 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2"
Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825890 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl"
Feb 14 06:01:26 crc kubenswrapper[4867]: I0214 06:01:26.998673 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:01:27 crc kubenswrapper[4867]: E0214 06:01:26.999434 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:01:39 crc kubenswrapper[4867]: I0214 06:01:39.019208 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:01:39 crc kubenswrapper[4867]: E0214 06:01:39.020319 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:01:49 crc kubenswrapper[4867]: I0214 06:01:49.998462 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:01:50 crc kubenswrapper[4867]: E0214 06:01:49.999529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:02:00 crc kubenswrapper[4867]: I0214 06:02:00.997433 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:02:01 crc kubenswrapper[4867]: E0214 06:02:00.998574 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 06:02:14 crc kubenswrapper[4867]: I0214 06:02:14.998953 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:02:15 crc kubenswrapper[4867]: I0214 06:02:15.494447 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6"}
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.276465 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:02:52 crc kubenswrapper[4867]: E0214 06:02:52.278625 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.278647 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.279208 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.281804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.299497 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.430633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.430701 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.458743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.613395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:02:53 crc kubenswrapper[4867]: I0214 06:02:53.207007 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.035985 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f" exitCode=0
Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.036032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"}
Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.036345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"a3992203babd7ad31e38237c380935b9570505dceb845b8dacf9d8bf92050df0"}
Feb 14 06:02:55 crc kubenswrapper[4867]: I0214 06:02:55.048310 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"}
Feb 14 06:02:57 crc kubenswrapper[4867]: I0214 06:02:57.082549 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d" exitCode=0
Feb 14 06:02:57 crc kubenswrapper[4867]: I0214 06:02:57.082632 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"}
Feb 14 06:02:58 crc kubenswrapper[4867]: I0214 06:02:58.112642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"}
Feb 14 06:02:58 crc kubenswrapper[4867]: I0214 06:02:58.141249 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bp6ls" podStartSLOduration=2.719328822 podStartE2EDuration="6.141223682s" podCreationTimestamp="2026-02-14 06:02:52 +0000 UTC" firstStartedPulling="2026-02-14 06:02:54.038364753 +0000 UTC m=+6806.119302067" lastFinishedPulling="2026-02-14 06:02:57.460259613 +0000 UTC m=+6809.541196927" observedRunningTime="2026-02-14 06:02:58.133664274 +0000 UTC m=+6810.214601608" watchObservedRunningTime="2026-02-14 06:02:58.141223682 +0000 UTC m=+6810.222161016"
Feb 14 06:03:02 crc kubenswrapper[4867]: I0214 06:03:02.614088 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:02 crc kubenswrapper[4867]: I0214 06:03:02.614916 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:03 crc kubenswrapper[4867]: I0214 06:03:03.687820 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bp6ls" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" probeResult="failure" output=<
Feb 14 06:03:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 06:03:03 crc kubenswrapper[4867]: >
Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.674968 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.752661 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.923414 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:03:14 crc kubenswrapper[4867]: I0214 06:03:14.328892 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bp6ls" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" containerID="cri-o://dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" gracePeriod=2
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.309674 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343325 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" exitCode=0
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"}
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343410 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"a3992203babd7ad31e38237c380935b9570505dceb845b8dacf9d8bf92050df0"}
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343427 4867 scope.go:117] "RemoveContainer" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343581 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.389827 4867 scope.go:117] "RemoveContainer" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") "
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") "
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") "
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.429379 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities" (OuterVolumeSpecName: "utilities") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.429861 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.450766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz" (OuterVolumeSpecName: "kube-api-access-d2dtz") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "kube-api-access-d2dtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.450846 4867 scope.go:117] "RemoveContainer" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.532322 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") on node \"crc\" DevicePath \"\""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.588881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.609859 4867 scope.go:117] "RemoveContainer" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"
Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.615801 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": container with ID starting with dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8 not found: ID does not exist" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.615855 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"} err="failed to get container status \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": rpc error: code = NotFound desc = could not find container \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": container with ID starting with dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8 not found: ID does not exist"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.615892 4867 scope.go:117] "RemoveContainer" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"
Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.616397 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": container with ID starting with f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d not found: ID does not exist" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.616440 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"} err="failed to get container status \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": rpc error: code = NotFound desc = could not find container \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": container with ID starting with f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d not found: ID does not exist"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.616466 4867 scope.go:117] "RemoveContainer" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"
Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.617986 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": container with ID starting with de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f not found: ID does not exist" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.618029 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"} err="failed to get container status \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": rpc error: code = NotFound desc = could not find container \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": container with ID starting with de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f not found: ID does not exist"
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.634995 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.681452 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.696249 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"]
Feb 14 06:03:17 crc kubenswrapper[4867]: I0214 06:03:17.029843 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" path="/var/lib/kubelet/pods/4b226f7b-fb10-4b1a-a225-587c9afaa99f/volumes"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.812762 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-skdnt"]
Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814538 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-content"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814559 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-content"
Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814589 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-utilities"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814595 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-utilities"
Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814617 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814922 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.818267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.838174 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skdnt"]
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.018889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.018931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019382 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.049435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.149028 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.731875 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skdnt"]
Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.661723 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" exitCode=0
Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.662267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95"}
Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.662293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"67d6cd647c70f60815e5468464561e04331f9bada9a699f6c2a9522d742b4aec"}
Feb 14 06:03:40 crc kubenswrapper[4867]: I0214 06:03:40.674573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"}
Feb 14 06:03:42 crc kubenswrapper[4867]: I0214 06:03:42.698926 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" exitCode=0
Feb 14 06:03:42 crc kubenswrapper[4867]: I0214 06:03:42.699000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"}
Feb 14 06:03:43 crc kubenswrapper[4867]: I0214 06:03:43.719743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"}
Feb 14 06:03:43 crc kubenswrapper[4867]: I0214 06:03:43.744619 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-skdnt" podStartSLOduration=3.3298954800000002 podStartE2EDuration="6.744600071s" podCreationTimestamp="2026-02-14 06:03:37 +0000 UTC" firstStartedPulling="2026-02-14 06:03:39.664761905 +0000 UTC m=+6851.745699219" lastFinishedPulling="2026-02-14 06:03:43.079466496 +0000 UTC m=+6855.160403810" observedRunningTime="2026-02-14 06:03:43.739812936 +0000 UTC m=+6855.820750250" watchObservedRunningTime="2026-02-14 06:03:43.744600071 +0000 UTC m=+6855.825537385"
Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.151331 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.157098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-skdnt"
Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.214187 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-skdnt"
Feb 14
06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.865883 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.947603 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.805399 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-skdnt" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="registry-server" containerID="cri-o://6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" gracePeriod=2 Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.870822 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.873452 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.893715 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990633 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093411 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.094199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.094333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.122669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.324314 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.544716 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711018 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711215 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711332 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.713471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities" (OuterVolumeSpecName: "utilities") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.718639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl" (OuterVolumeSpecName: "kube-api-access-f9kcl") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "kube-api-access-f9kcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.760477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814744 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814781 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814794 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821202 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" 
containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" exitCode=0 Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"} Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"67d6cd647c70f60815e5468464561e04331f9bada9a699f6c2a9522d742b4aec"} Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821298 4867 scope.go:117] "RemoveContainer" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821336 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.843087 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.844932 4867 scope.go:117] "RemoveContainer" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.864771 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.878131 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.907821 4867 scope.go:117] "RemoveContainer" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.948573 4867 scope.go:117] "RemoveContainer" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.949997 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": container with ID starting with 6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9 not found: ID does not exist" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950060 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"} err="failed to get container status \"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": rpc error: code = NotFound desc = could not find container 
\"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": container with ID starting with 6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9 not found: ID does not exist" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950098 4867 scope.go:117] "RemoveContainer" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.950392 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": container with ID starting with 270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101 not found: ID does not exist" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950429 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"} err="failed to get container status \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": rpc error: code = NotFound desc = could not find container \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": container with ID starting with 270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101 not found: ID does not exist" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950452 4867 scope.go:117] "RemoveContainer" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.950629 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": container with ID starting with fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95 not found: ID does not exist" 
containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950645 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95"} err="failed to get container status \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": rpc error: code = NotFound desc = could not find container \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": container with ID starting with fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95 not found: ID does not exist" Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841394 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" exitCode=0 Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"} Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"d6d2b1021653408d35b901acf6093bc305e2a51d6c61afc86f9c705075f31a80"} Feb 14 06:03:53 crc kubenswrapper[4867]: I0214 06:03:53.019238 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" path="/var/lib/kubelet/pods/1f920796-3206-4c6a-ad78-e8a2b2c07c79/volumes" Feb 14 06:03:53 crc kubenswrapper[4867]: I0214 06:03:53.857599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" 
event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} Feb 14 06:03:54 crc kubenswrapper[4867]: I0214 06:03:54.875841 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" exitCode=0 Feb 14 06:03:54 crc kubenswrapper[4867]: I0214 06:03:54.876088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} Feb 14 06:03:55 crc kubenswrapper[4867]: I0214 06:03:55.893488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} Feb 14 06:03:55 crc kubenswrapper[4867]: I0214 06:03:55.938089 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb97r" podStartSLOduration=3.50076664 podStartE2EDuration="5.938067281s" podCreationTimestamp="2026-02-14 06:03:50 +0000 UTC" firstStartedPulling="2026-02-14 06:03:52.845810642 +0000 UTC m=+6864.926747956" lastFinishedPulling="2026-02-14 06:03:55.283111283 +0000 UTC m=+6867.364048597" observedRunningTime="2026-02-14 06:03:55.931595061 +0000 UTC m=+6868.012532375" watchObservedRunningTime="2026-02-14 06:03:55.938067281 +0000 UTC m=+6868.019004595" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.325058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.325946 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.396759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:02 crc kubenswrapper[4867]: I0214 06:04:02.048985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:02 crc kubenswrapper[4867]: I0214 06:04:02.117415 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:03 crc kubenswrapper[4867]: I0214 06:04:03.999064 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb97r" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="registry-server" containerID="cri-o://c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" gracePeriod=2 Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.582761 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.675621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.675706 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.676027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.676572 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities" (OuterVolumeSpecName: "utilities") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.677258 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.688213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn" (OuterVolumeSpecName: "kube-api-access-jxfwn") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "kube-api-access-jxfwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.714314 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.780156 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.780192 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.016324 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" exitCode=0 Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.016669 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018465 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"d6d2b1021653408d35b901acf6093bc305e2a51d6c61afc86f9c705075f31a80"} Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018531 4867 scope.go:117] "RemoveContainer" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.065625 4867 scope.go:117] "RemoveContainer" 
containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.096208 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.103631 4867 scope.go:117] "RemoveContainer" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.112781 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.176410 4867 scope.go:117] "RemoveContainer" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177005 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": container with ID starting with c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73 not found: ID does not exist" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177039 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} err="failed to get container status \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": rpc error: code = NotFound desc = could not find container \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": container with ID starting with c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73 not found: ID does not exist" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177060 4867 scope.go:117] "RemoveContainer" 
containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"
Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177391 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": container with ID starting with 0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d not found: ID does not exist" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"
Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177426 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} err="failed to get container status \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": rpc error: code = NotFound desc = could not find container \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": container with ID starting with 0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d not found: ID does not exist"
Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177445 4867 scope.go:117] "RemoveContainer" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"
Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177906 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": container with ID starting with 413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430 not found: ID does not exist" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"
Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177939 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"} err="failed to get container status \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": rpc error: code = NotFound desc = could not find container \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": container with ID starting with 413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430 not found: ID does not exist"
Feb 14 06:04:07 crc kubenswrapper[4867]: I0214 06:04:07.022310 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" path="/var/lib/kubelet/pods/9d8ca39c-0068-495f-97b4-5da29e98c60d/volumes"
Feb 14 06:04:31 crc kubenswrapper[4867]: I0214 06:04:31.251232 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 06:04:31 crc kubenswrapper[4867]: I0214 06:04:31.252055 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 06:05:01 crc kubenswrapper[4867]: I0214 06:05:01.251302 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 06:05:01 crc kubenswrapper[4867]: I0214 06:05:01.252017 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.850344 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rhf5c"]
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851299 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="extract-utilities"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851312 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="extract-utilities"
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851456 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851466 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851479 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="extract-content"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851488 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="extract-content"
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851551 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="extract-content"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851557 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="extract-content"
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851573 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851578 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: E0214 06:05:27.851588 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="extract-utilities"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.851595 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="extract-utilities"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.885869 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.885964 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="registry-server"
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.889183 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhf5c"]
Feb 14 06:05:27 crc kubenswrapper[4867]: I0214 06:05:27.889282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.018730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-catalog-content\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.018829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sh87\" (UniqueName: \"kubernetes.io/projected/96a7e834-3c00-4eca-96f4-1f90608b01c3-kube-api-access-7sh87\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.019354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-utilities\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.122292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-utilities\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.122450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-catalog-content\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.122500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sh87\" (UniqueName: \"kubernetes.io/projected/96a7e834-3c00-4eca-96f4-1f90608b01c3-kube-api-access-7sh87\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.123020 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-utilities\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.123028 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a7e834-3c00-4eca-96f4-1f90608b01c3-catalog-content\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.157155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sh87\" (UniqueName: \"kubernetes.io/projected/96a7e834-3c00-4eca-96f4-1f90608b01c3-kube-api-access-7sh87\") pod \"redhat-operators-rhf5c\" (UID: \"96a7e834-3c00-4eca-96f4-1f90608b01c3\") " pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.221785 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:28 crc kubenswrapper[4867]: I0214 06:05:28.916496 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhf5c"]
Feb 14 06:05:28 crc kubenswrapper[4867]: W0214 06:05:28.927715 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a7e834_3c00_4eca_96f4_1f90608b01c3.slice/crio-f5451218d46ba38b7a276381550dee061b8d05f51dbd62f03b43707d6a3ff193 WatchSource:0}: Error finding container f5451218d46ba38b7a276381550dee061b8d05f51dbd62f03b43707d6a3ff193: Status 404 returned error can't find the container with id f5451218d46ba38b7a276381550dee061b8d05f51dbd62f03b43707d6a3ff193
Feb 14 06:05:29 crc kubenswrapper[4867]: I0214 06:05:29.178614 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhf5c" event={"ID":"96a7e834-3c00-4eca-96f4-1f90608b01c3","Type":"ContainerStarted","Data":"f5451218d46ba38b7a276381550dee061b8d05f51dbd62f03b43707d6a3ff193"}
Feb 14 06:05:30 crc kubenswrapper[4867]: I0214 06:05:30.191586 4867 generic.go:334] "Generic (PLEG): container finished" podID="96a7e834-3c00-4eca-96f4-1f90608b01c3" containerID="a1577169f53796ad87dd9cbe2c5941fcc1d433aed1d962478f74adbc5bcd9048" exitCode=0
Feb 14 06:05:30 crc kubenswrapper[4867]: I0214 06:05:30.191648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhf5c" event={"ID":"96a7e834-3c00-4eca-96f4-1f90608b01c3","Type":"ContainerDied","Data":"a1577169f53796ad87dd9cbe2c5941fcc1d433aed1d962478f74adbc5bcd9048"}
Feb 14 06:05:31 crc kubenswrapper[4867]: I0214 06:05:31.251524 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 06:05:31 crc kubenswrapper[4867]: I0214 06:05:31.251908 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 06:05:31 crc kubenswrapper[4867]: I0214 06:05:31.251963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 06:05:31 crc kubenswrapper[4867]: I0214 06:05:31.253152 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 06:05:31 crc kubenswrapper[4867]: I0214 06:05:31.253216 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6" gracePeriod=600
Feb 14 06:05:32 crc kubenswrapper[4867]: I0214 06:05:32.217443 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhf5c" event={"ID":"96a7e834-3c00-4eca-96f4-1f90608b01c3","Type":"ContainerStarted","Data":"15242e648886784fc4ca0d3130a9e6c65c41e7d0015df516397a3c0f382d001f"}
Feb 14 06:05:32 crc kubenswrapper[4867]: I0214 06:05:32.225763 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6" exitCode=0
Feb 14 06:05:32 crc kubenswrapper[4867]: I0214 06:05:32.225825 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6"}
Feb 14 06:05:32 crc kubenswrapper[4867]: I0214 06:05:32.225857 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"29349c5db391c092dec0896657679e70705e3056d75cc0108e6a58d7c22b4c81"}
Feb 14 06:05:32 crc kubenswrapper[4867]: I0214 06:05:32.225897 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"
Feb 14 06:05:37 crc kubenswrapper[4867]: I0214 06:05:37.286851 4867 generic.go:334] "Generic (PLEG): container finished" podID="96a7e834-3c00-4eca-96f4-1f90608b01c3" containerID="15242e648886784fc4ca0d3130a9e6c65c41e7d0015df516397a3c0f382d001f" exitCode=0
Feb 14 06:05:37 crc kubenswrapper[4867]: I0214 06:05:37.286936 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhf5c" event={"ID":"96a7e834-3c00-4eca-96f4-1f90608b01c3","Type":"ContainerDied","Data":"15242e648886784fc4ca0d3130a9e6c65c41e7d0015df516397a3c0f382d001f"}
Feb 14 06:05:38 crc kubenswrapper[4867]: I0214 06:05:38.302258 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhf5c" event={"ID":"96a7e834-3c00-4eca-96f4-1f90608b01c3","Type":"ContainerStarted","Data":"9ae9436a1e5c2e515129ad54a68da04aa3f2c5ad6550eb858eccc394361668a0"}
Feb 14 06:05:38 crc kubenswrapper[4867]: I0214 06:05:38.325042 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rhf5c" podStartSLOduration=3.817435595 podStartE2EDuration="11.325019794s" podCreationTimestamp="2026-02-14 06:05:27 +0000 UTC" firstStartedPulling="2026-02-14 06:05:30.194443946 +0000 UTC m=+6962.275381260" lastFinishedPulling="2026-02-14 06:05:37.702028155 +0000 UTC m=+6969.782965459" observedRunningTime="2026-02-14 06:05:38.318692148 +0000 UTC m=+6970.399629462" watchObservedRunningTime="2026-02-14 06:05:38.325019794 +0000 UTC m=+6970.405957108"
Feb 14 06:05:48 crc kubenswrapper[4867]: I0214 06:05:48.222616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:48 crc kubenswrapper[4867]: I0214 06:05:48.223217 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rhf5c"
Feb 14 06:05:49 crc kubenswrapper[4867]: I0214 06:05:49.287607 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rhf5c" podUID="96a7e834-3c00-4eca-96f4-1f90608b01c3" containerName="registry-server" probeResult="failure" output=<
Feb 14 06:05:49 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 06:05:49 crc kubenswrapper[4867]: >
Feb 14 06:05:59 crc kubenswrapper[4867]: I0214 06:05:59.289360 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rhf5c" podUID="96a7e834-3c00-4eca-96f4-1f90608b01c3" containerName="registry-server" probeResult="failure" output=<
Feb 14 06:05:59 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 06:05:59 crc kubenswrapper[4867]: >